Mar 18 13:04:58.914843 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 13:04:59.513143 master-0 kubenswrapper[4025]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
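Each of the "has been deprecated" warnings above points at the same remedy: move the setting into the KubeletConfiguration file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump below). A minimal sketch of the equivalent config-file fields, using the values this node actually passes on the command line (the field names are the upstream kubelet.config.k8s.io/v1beta1 ones; this is illustrative, not this cluster's rendered config):

```yaml
# Hypothetical KubeletConfiguration fragment showing where the deprecated
# CLI flags would move; values copied from the FLAG dump in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock   # was --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
registerWithTaints:                                  # was --register-with-taints
- key: node-role.kubernetes.io/master
  effect: NoSchedule
systemReserved:                                      # was --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```

On OpenShift this file is rendered by the Machine Config Operator, so the flags are expected to stay on the unit line; the warnings are benign.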
Mar 18 13:04:59.513972 master-0 kubenswrapper[4025]: I0318 13:04:59.513730 4025 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520548 4025 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520570 4025 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520574 4025 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520578 4025 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520582 4025 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520586 4025 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520591 4025 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:04:59.520590 master-0 kubenswrapper[4025]: W0318 13:04:59.520595 4025 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520604 4025 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520609 4025 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520614 4025 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520618 4025 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520622 4025 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520625 4025 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520629 4025 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520633 4025 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520637 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520711 4025 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520715 4025 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520719 4025 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520725 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520729 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520732 4025 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520736 4025 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520740 4025 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520744 4025 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520748 4025 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:04:59.520790 master-0 kubenswrapper[4025]: W0318 13:04:59.520752 4025 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520756 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520759 4025 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520764 4025 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520767 4025 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520775 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520779 4025 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520782 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520786 4025 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520790 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520794 4025 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520798 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520804 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520808 4025 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520813 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520817 4025 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520821 4025 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520825 4025 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520831 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520835 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:04:59.521175 master-0 kubenswrapper[4025]: W0318 13:04:59.520838 4025 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520842 4025 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520846 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520849 4025 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520853 4025 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520857 4025 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520860 4025 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520864 4025 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520868 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520873 4025 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520880 4025 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520885 4025 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520890 4025 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520895 4025 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520900 4025 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520904 4025 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520910 4025 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520958 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520962 4025 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:04:59.521627 master-0 kubenswrapper[4025]: W0318 13:04:59.520967 4025 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:04:59.522032 master-0 kubenswrapper[4025]: W0318 13:04:59.520971 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:04:59.522032 master-0 kubenswrapper[4025]: W0318 13:04:59.520977 4025 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:04:59.522032 master-0 kubenswrapper[4025]: W0318 13:04:59.520980 4025 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:04:59.522032 master-0 kubenswrapper[4025]: W0318 13:04:59.520984 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:04:59.522032 master-0 kubenswrapper[4025]: W0318 13:04:59.520988 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523717 4025 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523754 4025 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523766 4025 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523772 4025 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523778 4025 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523782 4025 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 13:04:59.523763 master-0 kubenswrapper[4025]: I0318 13:04:59.523788 4025 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523795 4025 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523800 4025 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523804 4025 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523809 4025 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523813 4025 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523818 4025 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523822 4025 flags.go:64] FLAG: --cgroup-root=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523826 4025 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523830 4025 flags.go:64] FLAG: --client-ca-file=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523834 4025 flags.go:64] FLAG: --cloud-config=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523838 4025 flags.go:64] FLAG: --cloud-provider=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523842 4025 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523847 4025 flags.go:64] FLAG: --cluster-domain=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523850 4025 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523855 4025 flags.go:64] FLAG: --config-dir=""
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523859 4025 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523863 4025 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523868 4025 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523873 4025 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523877 4025 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523881 4025 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523885 4025 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523889 4025 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523894 4025 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 13:04:59.524001 master-0 kubenswrapper[4025]: I0318 13:04:59.523898 4025 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523903 4025 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523923 4025 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523927 4025 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523931 4025 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523936 4025 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523940 4025 flags.go:64] FLAG: --enable-server="true"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523944 4025 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523951 4025 flags.go:64] FLAG: --event-burst="100"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523956 4025 flags.go:64] FLAG: --event-qps="50"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523960 4025 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523964 4025 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523968 4025 flags.go:64] FLAG: --eviction-hard=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523974 4025 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523978 4025 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523982 4025 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523986 4025 flags.go:64] FLAG: --eviction-soft=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523990 4025 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523994 4025 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.523998 4025 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524003 4025 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524007 4025 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524012 4025 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524017 4025 flags.go:64] FLAG: --feature-gates=""
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524022 4025 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 13:04:59.524615 master-0 kubenswrapper[4025]: I0318 13:04:59.524026 4025 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524030 4025 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524035 4025 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524039 4025 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524043 4025 flags.go:64] FLAG: --help="false"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524049 4025 flags.go:64] FLAG: --hostname-override=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524052 4025 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524057 4025 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524062 4025 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524066 4025 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524070 4025 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524074 4025 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524078 4025 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524082 4025 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524086 4025 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524091 4025 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524096 4025 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524100 4025 flags.go:64] FLAG: --kube-reserved=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524104 4025 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524108 4025 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524113 4025 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524117 4025 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524121 4025 flags.go:64] FLAG: --lock-file=""
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524125 4025 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524130 4025 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 13:04:59.525180 master-0 kubenswrapper[4025]: I0318 13:04:59.524134 4025 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524142 4025 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524146 4025 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524150 4025 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524154 4025 flags.go:64] FLAG: --logging-format="text"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524158 4025 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524168 4025 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524172 4025 flags.go:64] FLAG: --manifest-url=""
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524180 4025 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524186 4025 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524195 4025 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524200 4025 flags.go:64] FLAG: --max-pods="110"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524204 4025 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524208 4025 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524216 4025 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524221 4025 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524229 4025 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524234 4025 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524238 4025 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524281 4025 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524286 4025 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524295 4025 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524299 4025 flags.go:64] FLAG: --pod-cidr=""
Mar 18 13:04:59.525758 master-0 kubenswrapper[4025]: I0318 13:04:59.524304 4025 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524315 4025 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524323 4025 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524328 4025 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524336 4025 flags.go:64] FLAG: --port="10250"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524340 4025 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524344 4025 flags.go:64] FLAG: --provider-id=""
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524348 4025 flags.go:64] FLAG: --qos-reserved=""
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524352 4025 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524356 4025 flags.go:64] FLAG: --register-node="true"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524360 4025 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524364 4025 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524385 4025 flags.go:64] FLAG: --registry-burst="10"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524389 4025 flags.go:64] FLAG: --registry-qps="5"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524395 4025 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524399 4025 flags.go:64] FLAG: --reserved-memory=""
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524404 4025 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524425 4025 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524429 4025 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524433 4025 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524437 4025 flags.go:64] FLAG: --runonce="false"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524441 4025 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524446 4025 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524450 4025 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524454 4025 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524458 4025 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 13:04:59.526376 master-0 kubenswrapper[4025]: I0318 13:04:59.524463 4025 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524467 4025 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524471 4025 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524475 4025 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524479 4025 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524483 4025 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524487 4025 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524496 4025 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524501 4025 flags.go:64] FLAG: --system-cgroups=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524505 4025 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524511 4025 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524516 4025 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524520 4025 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524526 4025 flags.go:64] FLAG: --tls-min-version=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524530 4025 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524534 4025 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524537 4025 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524542 4025 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524546 4025 flags.go:64] FLAG: --v="2"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524551 4025 flags.go:64] FLAG: --version="false"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524557 4025 flags.go:64] FLAG: --vmodule=""
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524562 4025 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: I0318 13:04:59.524566 4025 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: W0318 13:04:59.524689 4025 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:04:59.527093 master-0 kubenswrapper[4025]: W0318 13:04:59.524695 4025 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524699 4025 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524703 4025 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524706 4025 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524710 4025 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524714 4025 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524718 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524723 4025 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524728 4025 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524732 4025 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524736 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524740 4025 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524745 4025 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524749 4025 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524753 4025 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524759 4025 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524763 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524767 4025 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524770 4025 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:04:59.527743 master-0 kubenswrapper[4025]: W0318 13:04:59.524774 4025 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524778 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524782 4025 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524785 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524789 4025 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524792 4025 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524796 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524799 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524803 4025 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524806 4025 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524810 4025 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524813 4025 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524817 4025 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524821 4025 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524824 4025 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524828 4025 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524831 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524836 4025 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524840 4025 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524843 4025 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:04:59.528243 master-0 kubenswrapper[4025]: W0318 13:04:59.524847 4025 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524850 4025 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524854 4025 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524858 4025 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524861 4025 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524864 4025 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524868 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524871 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524878 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524882 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524885 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524889 4025 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524893 4025 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524896 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524900 4025 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524903 4025 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524907 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524910 4025 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524914 4025 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524917 4025 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:04:59.528711 master-0 kubenswrapper[4025]: W0318 13:04:59.524921 4025 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524925 4025 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524929 4025 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524933 4025 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524937 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524941 4025 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524944 4025 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524948 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524952 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524956 4025 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524960 4025 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: W0318 13:04:59.524964 4025 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:04:59.529131 master-0 kubenswrapper[4025]: I0318 13:04:59.524975 4025 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:04:59.532375 master-0 kubenswrapper[4025]: I0318 13:04:59.532291 4025 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 13:04:59.532375 master-0 kubenswrapper[4025]: I0318 13:04:59.532345 4025 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532472 4025 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532483 4025 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532489 4025 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532494 4025 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532500 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532507 4025 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532512 4025 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532517 4025 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532523 4025 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532528 4025 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532533 4025 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532538 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532544 4025 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532549 4025 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532554 4025 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532559 4025 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532563 4025 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532569 4025 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532574 4025 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532579 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:04:59.532568 master-0 kubenswrapper[4025]: W0318 13:04:59.532585 4025 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532590 4025 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532596 4025 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532601 4025 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532606 4025 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532612 4025 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532617 4025 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532622 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532629 4025 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532638 4025 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532645 4025 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532652 4025 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532659 4025 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532665 4025 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532672 4025 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532677 4025 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532683 4025 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532689 4025 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:04:59.533349 master-0 kubenswrapper[4025]: W0318 13:04:59.532694 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532700 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532705 4025 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532711 4025 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532716 4025 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532722 4025 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532728 4025 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532733 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532738 4025 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532744 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532749 4025 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532754 4025 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532759 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532764 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532769 4025 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532774 4025 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532780 4025 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532785 4025 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532790 4025 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532795 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:04:59.534164 master-0 kubenswrapper[4025]: W0318 13:04:59.532800 4025 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532805 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532810 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532815 4025 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532821 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532825 4025 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532830 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532837 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532842 4025 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532847 4025 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532852 4025 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532856 4025 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532861 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.532866 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: I0318 13:04:59.532875 4025 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:04:59.535014 master-0 kubenswrapper[4025]: W0318 13:04:59.533040 4025 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533056 4025 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533066 4025 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533073 4025 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533079 4025 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533086 4025 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533093 4025 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533099 4025 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533106 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533111 4025 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533116 4025 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533121 4025 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533126 4025 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533131 4025 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533138 4025 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533143 4025 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533148 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533152 4025 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533158 4025 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:04:59.535726 master-0 kubenswrapper[4025]: W0318 13:04:59.533166 4025 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533173 4025 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533179 4025 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533185 4025 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533191 4025 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533198 4025 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533203 4025 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533209 4025 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533215 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533238 4025 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533245 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533250 4025 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533257 4025 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533263 4025 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533268 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533274 4025 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533280 4025 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533285 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533290 4025 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533295 4025 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:04:59.536522 master-0 kubenswrapper[4025]: W0318 13:04:59.533300 4025 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533306 4025 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533313 4025 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533319 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533324 4025 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533329 4025 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533335 4025 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533342 4025 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533347 4025 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533352 4025 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533358 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533364 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533369 4025 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533375 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533380 4025 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533385 4025 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533391 4025 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533397 4025 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533402 4025 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533408 4025 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:04:59.537219 master-0 kubenswrapper[4025]: W0318 13:04:59.533433 4025 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533439 4025 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533444 4025 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533449 4025 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533454 4025 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533461 4025 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533466 4025 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533471 4025 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533476 4025 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533481 4025 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533486 4025 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533491 4025 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: W0318 13:04:59.533497 4025 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: I0318 13:04:59.533505 4025 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:04:59.538019 master-0 kubenswrapper[4025]: I0318 13:04:59.533710 4025 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:04:59.538482 master-0 kubenswrapper[4025]: I0318 13:04:59.536711 4025 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 13:04:59.538482 master-0 kubenswrapper[4025]: I0318 13:04:59.537864 4025 server.go:997] "Starting client certificate rotation"
Mar 18 13:04:59.538482 master-0 kubenswrapper[4025]: I0318 13:04:59.537891 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 13:04:59.538482 master-0 kubenswrapper[4025]: I0318 13:04:59.538036 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 13:04:59.561467 master-0 kubenswrapper[4025]: I0318 13:04:59.561386 4025 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:04:59.565552 master-0 kubenswrapper[4025]: E0318 13:04:59.565500 4025 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:04:59.566084 master-0 kubenswrapper[4025]: I0318 13:04:59.566051 4025 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:04:59.585173 master-0 kubenswrapper[4025]: I0318 13:04:59.585077 4025 log.go:25] "Validated CRI v1 runtime API"
Mar 18 13:04:59.591833 master-0 kubenswrapper[4025]: I0318 13:04:59.591748 4025 log.go:25] "Validated CRI v1 image API"
Mar 18 13:04:59.594795 master-0 kubenswrapper[4025]: I0318 13:04:59.594750 4025 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 13:04:59.598794 master-0 kubenswrapper[4025]: I0318 13:04:59.598734 4025 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b51f6abc-d651-468e-ae51-7c88144268ce:/dev/vda3]
Mar 18 13:04:59.598895 master-0 kubenswrapper[4025]: I0318 13:04:59.598795 4025 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 18 13:04:59.620898 master-0 kubenswrapper[4025]: I0318 13:04:59.620642 4025 manager.go:217] Machine: {Timestamp:2026-03-18 13:04:59.619885388 +0000 UTC m=+0.539764030 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b28c177d1c547b6b192765c9d5bc20c SystemUUID:0b28c177-d1c5-47b6-b192-765c9d5bc20c BootID:82754421-b051-4950-9dab-4c3886d93f55 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:c1:64:46 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ac:24:0f Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:52:62:dd:3e:30:92 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 13:04:59.620898 master-0 kubenswrapper[4025]: I0318 13:04:59.620851 4025 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 13:04:59.621217 master-0 kubenswrapper[4025]: I0318 13:04:59.621048 4025 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 13:04:59.621505 master-0 kubenswrapper[4025]: I0318 13:04:59.621405 4025 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 13:04:59.621608 master-0 kubenswrapper[4025]: I0318 13:04:59.621572 4025 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 13:04:59.621831 master-0 kubenswrapper[4025]: I0318 13:04:59.621605 4025 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 13:04:59.623825 master-0 kubenswrapper[4025]: I0318 13:04:59.623774 4025 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 13:04:59.623825 master-0 kubenswrapper[4025]: I0318 13:04:59.623799 4025 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 13:04:59.624039 master-0 kubenswrapper[4025]: I0318 13:04:59.623877 4025 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:04:59.624039 master-0 kubenswrapper[4025]: I0318 13:04:59.623909 4025 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:04:59.624141 master-0 kubenswrapper[4025]: I0318 13:04:59.624044 4025 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:04:59.624141 master-0 kubenswrapper[4025]: I0318 13:04:59.624131 4025 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 13:04:59.629186 master-0 kubenswrapper[4025]: I0318 13:04:59.629127 4025 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 13:04:59.629186 master-0 kubenswrapper[4025]: I0318 13:04:59.629160 4025 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 13:04:59.629186 master-0 kubenswrapper[4025]: I0318 13:04:59.629191 4025 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 13:04:59.629479 master-0 kubenswrapper[4025]: I0318 13:04:59.629208 4025 kubelet.go:324] "Adding apiserver pod source"
Mar 18 13:04:59.629479 master-0 kubenswrapper[4025]: I0318 13:04:59.629226 4025 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 13:04:59.634445 master-0 kubenswrapper[4025]: I0318 13:04:59.634373 4025 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 13:04:59.636311 master-0 kubenswrapper[4025]: I0318 13:04:59.636275 4025 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636469 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636491 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636498 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636505 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636512 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636519 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636526 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 13:04:59.636522 master-0 kubenswrapper[4025]: I0318 13:04:59.636533 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 13:04:59.636950 master-0 kubenswrapper[4025]: I0318 13:04:59.636571 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 13:04:59.636950 master-0 kubenswrapper[4025]: I0318 13:04:59.636582 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 13:04:59.636950 master-0 kubenswrapper[4025]: I0318 13:04:59.636594 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 13:04:59.636950 master-0 kubenswrapper[4025]: I0318 13:04:59.636937 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 13:04:59.637758 master-0 kubenswrapper[4025]: W0318 13:04:59.637686 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:04:59.637758 master-0 kubenswrapper[4025]: W0318 13:04:59.637706 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:04:59.637899 master-0 kubenswrapper[4025]: E0318 13:04:59.637792 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:04:59.637899 master-0 kubenswrapper[4025]: E0318 13:04:59.637755 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:04:59.639210 master-0 kubenswrapper[4025]: I0318 13:04:59.639157 4025 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 13:04:59.639951 master-0 kubenswrapper[4025]: I0318 13:04:59.639911 4025 server.go:1280] "Started kubelet"
Mar 18 13:04:59.641671 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 13:04:59.641897 master-0 kubenswrapper[4025]: I0318 13:04:59.641782 4025 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 13:04:59.642483 master-0 kubenswrapper[4025]: I0318 13:04:59.641831 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:04:59.642483 master-0 kubenswrapper[4025]: I0318 13:04:59.641988 4025 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 13:04:59.642483 master-0 kubenswrapper[4025]: I0318 13:04:59.642208 4025 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 13:04:59.642987 master-0 kubenswrapper[4025]: I0318 13:04:59.642918 4025 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 13:04:59.643945 master-0 kubenswrapper[4025]: I0318 13:04:59.643901 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 13:04:59.643945 master-0 kubenswrapper[4025]: I0318 13:04:59.643941 4025 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 13:04:59.644341 master-0 kubenswrapper[4025]: E0318 13:04:59.644219 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:04:59.645499 master-0 kubenswrapper[4025]: I0318 13:04:59.645464 4025 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 13:04:59.645499 master-0 kubenswrapper[4025]: I0318 13:04:59.645479 4025 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 13:04:59.645716 master-0 kubenswrapper[4025]: I0318 13:04:59.645578 4025 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 13:04:59.645716 master-0 kubenswrapper[4025]: I0318 13:04:59.645644 4025 factory.go:55] Registering systemd factory
Mar 18 13:04:59.645716 master-0 kubenswrapper[4025]: I0318 13:04:59.645681 4025 factory.go:221] Registration of the systemd container factory successfully
Mar 18 13:04:59.645716 master-0 kubenswrapper[4025]: I0318 13:04:59.645694 4025 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 13:04:59.645716 master-0 kubenswrapper[4025]: I0318 13:04:59.645714 4025 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 13:04:59.646081 master-0 kubenswrapper[4025]: I0318 13:04:59.645912 4025 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 13:04:59.655240 master-0 kubenswrapper[4025]: W0318 13:04:59.655126 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:04:59.655495 master-0 kubenswrapper[4025]: E0318 13:04:59.655252 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:04:59.656773 master-0 kubenswrapper[4025]: I0318 13:04:59.656718 4025 factory.go:153] Registering CRI-O factory
Mar 18 13:04:59.656773 master-0 kubenswrapper[4025]: I0318 13:04:59.656766 4025 factory.go:221] Registration of the crio container factory successfully
Mar 18 13:04:59.656971 master-0 kubenswrapper[4025]: I0318 13:04:59.656929 4025 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 13:04:59.657049 master-0 kubenswrapper[4025]: I0318 13:04:59.656972 4025 factory.go:103] Registering Raw factory
Mar 18 13:04:59.657296 master-0 kubenswrapper[4025]: I0318 13:04:59.657249 4025 manager.go:1196] Started watching for new ooms in manager
Mar 18 13:04:59.659026 master-0 kubenswrapper[4025]: E0318 13:04:59.658921 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 13:04:59.660588 master-0 kubenswrapper[4025]: I0318 13:04:59.660549 4025 manager.go:319] Starting recovery of all containers
Mar 18 13:04:59.662765 master-0 kubenswrapper[4025]: E0318 13:04:59.661119 4025 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 18 13:04:59.664735 master-0 kubenswrapper[4025]: E0318 13:04:59.662341 4025 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189df1434ebb8e14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,LastTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:04:59.684745 master-0 kubenswrapper[4025]: I0318 13:04:59.684458 4025 manager.go:324] Recovery completed
Mar 18 13:04:59.696393 master-0 kubenswrapper[4025]: I0318 13:04:59.696346 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.698051 master-0 kubenswrapper[4025]: I0318 13:04:59.697994 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.698051 master-0 kubenswrapper[4025]: I0318 13:04:59.698038 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.698051 master-0 kubenswrapper[4025]: I0318 13:04:59.698049 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.699477 master-0 kubenswrapper[4025]: I0318 13:04:59.699449 4025 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 13:04:59.699477 master-0 kubenswrapper[4025]: I0318 13:04:59.699466 4025 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 13:04:59.699577 master-0 kubenswrapper[4025]: I0318 13:04:59.699495 4025 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:04:59.704225 master-0 kubenswrapper[4025]: I0318 13:04:59.704197 4025 policy_none.go:49] "None policy: Start"
Mar 18 13:04:59.705012 master-0 kubenswrapper[4025]: I0318 13:04:59.704982 4025 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 13:04:59.705063 master-0 kubenswrapper[4025]: I0318 13:04:59.705028 4025 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 13:04:59.745057 master-0 kubenswrapper[4025]: E0318 13:04:59.745005 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:04:59.765594 master-0 kubenswrapper[4025]: I0318 13:04:59.765447 4025 manager.go:334] "Starting Device Plugin manager"
Mar 18 13:04:59.765594 master-0 kubenswrapper[4025]: I0318 13:04:59.765508 4025 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 13:04:59.766127 master-0 kubenswrapper[4025]: I0318 13:04:59.766098 4025 server.go:79] "Starting device plugin registration server"
Mar 18 13:04:59.766570 master-0 kubenswrapper[4025]: I0318 13:04:59.766545 4025 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 13:04:59.766625 master-0 kubenswrapper[4025]: I0318 13:04:59.766570 4025 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 13:04:59.766833 master-0 kubenswrapper[4025]: I0318 13:04:59.766795 4025 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 13:04:59.766942 master-0 kubenswrapper[4025]: I0318 13:04:59.766918 4025 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 13:04:59.766942 master-0 kubenswrapper[4025]: I0318 13:04:59.766938 4025 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 13:04:59.769221 master-0 kubenswrapper[4025]: E0318 13:04:59.769189 4025 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:04:59.801310 master-0 kubenswrapper[4025]: I0318 13:04:59.801220 4025 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: I0318 13:04:59.802931 4025 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: I0318 13:04:59.803008 4025 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: I0318 13:04:59.803055 4025 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: E0318 13:04:59.803123 4025 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: W0318 13:04:59.805480 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:04:59.814283 master-0 kubenswrapper[4025]: E0318 13:04:59.805575 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:04:59.860849 master-0 kubenswrapper[4025]: E0318 13:04:59.860759 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 13:04:59.866919 master-0 kubenswrapper[4025]: I0318 13:04:59.866876 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.868069 master-0 kubenswrapper[4025]: I0318 13:04:59.867981 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.868069 master-0 kubenswrapper[4025]: I0318 13:04:59.868012 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.868069 master-0 kubenswrapper[4025]: I0318 13:04:59.868020 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.868069 master-0 kubenswrapper[4025]: I0318 13:04:59.868042 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:04:59.868819 master-0 kubenswrapper[4025]: E0318 13:04:59.868770 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 13:04:59.903911 master-0 kubenswrapper[4025]: I0318 13:04:59.903850 4025 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 18 13:04:59.904161 master-0 kubenswrapper[4025]: I0318 13:04:59.904148 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.905239 master-0 kubenswrapper[4025]: I0318 13:04:59.905222 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.905346 master-0 kubenswrapper[4025]: I0318 13:04:59.905334 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.905434 master-0 kubenswrapper[4025]: I0318 13:04:59.905424 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.905580 master-0 kubenswrapper[4025]: I0318 13:04:59.905569 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.905868 master-0 kubenswrapper[4025]: I0318 13:04:59.905837 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:04:59.905930 master-0 kubenswrapper[4025]: I0318 13:04:59.905886 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.906166 master-0 kubenswrapper[4025]: I0318 13:04:59.906151 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.906220 master-0 kubenswrapper[4025]: I0318 13:04:59.906176 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.906220 master-0 kubenswrapper[4025]: I0318 13:04:59.906184 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.906461 master-0 kubenswrapper[4025]: I0318 13:04:59.906447 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.906571 master-0 kubenswrapper[4025]: I0318 13:04:59.906550 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 13:04:59.906621 master-0 kubenswrapper[4025]: I0318 13:04:59.906575 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.906718 master-0 kubenswrapper[4025]: I0318 13:04:59.906702 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.906791 master-0 kubenswrapper[4025]: I0318 13:04:59.906778 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.906870 master-0 kubenswrapper[4025]: I0318 13:04:59.906858 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.906982 master-0 kubenswrapper[4025]: I0318 13:04:59.906959 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.906982 master-0 kubenswrapper[4025]: I0318 13:04:59.906981 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.907082 master-0 kubenswrapper[4025]: I0318 13:04:59.906989 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.907082 master-0 kubenswrapper[4025]: I0318 13:04:59.907063 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.907082 master-0 kubenswrapper[4025]: I0318 13:04:59.907076 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.907168 master-0 kubenswrapper[4025]: I0318 13:04:59.907076 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.907168 master-0 kubenswrapper[4025]: I0318 13:04:59.907110 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.907828 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.907891 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.908646 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.908677 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.908687 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:04:59.909073 master-0 kubenswrapper[4025]: I0318 13:04:59.908840 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:04:59.909378 master-0 kubenswrapper[4025]: I0318 13:04:59.909077 4025 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:04:59.909378 master-0 kubenswrapper[4025]: I0318 13:04:59.909168 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910521 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910539 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910555 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910579 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910598 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910606 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910734 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910747 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910760 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:04:59.911131 master-0 
kubenswrapper[4025]: I0318 13:04:59.910819 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:04:59.911131 master-0 kubenswrapper[4025]: I0318 13:04:59.910846 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:04:59.911892 master-0 kubenswrapper[4025]: I0318 13:04:59.911858 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:04:59.911955 master-0 kubenswrapper[4025]: I0318 13:04:59.911919 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:04:59.911955 master-0 kubenswrapper[4025]: I0318 13:04:59.911938 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:04:59.947537 master-0 kubenswrapper[4025]: I0318 13:04:59.947477 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:04:59.947665 master-0 kubenswrapper[4025]: I0318 13:04:59.947550 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:04:59.947665 master-0 kubenswrapper[4025]: I0318 13:04:59.947593 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" 
(UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:04:59.947665 master-0 kubenswrapper[4025]: I0318 13:04:59.947629 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:04:59.947665 master-0 kubenswrapper[4025]: I0318 13:04:59.947661 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:04:59.947784 master-0 kubenswrapper[4025]: I0318 13:04:59.947695 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:04:59.947784 master-0 kubenswrapper[4025]: I0318 13:04:59.947726 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:04:59.947784 master-0 kubenswrapper[4025]: I0318 13:04:59.947757 4025 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:04:59.947862 master-0 kubenswrapper[4025]: I0318 13:04:59.947790 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:04:59.947862 master-0 kubenswrapper[4025]: I0318 13:04:59.947822 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:04:59.947917 master-0 kubenswrapper[4025]: I0318 13:04:59.947853 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:04:59.947917 master-0 kubenswrapper[4025]: I0318 13:04:59.947898 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: 
\"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:04:59.948016 master-0 kubenswrapper[4025]: I0318 13:04:59.947964 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:04:59.948016 master-0 kubenswrapper[4025]: I0318 13:04:59.948006 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:04:59.948077 master-0 kubenswrapper[4025]: I0318 13:04:59.948040 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:04:59.948109 master-0 kubenswrapper[4025]: I0318 13:04:59.948072 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:04:59.948146 master-0 kubenswrapper[4025]: I0318 13:04:59.948108 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049017 master-0 kubenswrapper[4025]: I0318 13:05:00.048881 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049017 master-0 kubenswrapper[4025]: I0318 13:05:00.048923 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049017 master-0 kubenswrapper[4025]: I0318 13:05:00.048938 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049017 master-0 kubenswrapper[4025]: I0318 13:05:00.048959 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049017 master-0 kubenswrapper[4025]: I0318 13:05:00.048977 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049355 master-0 kubenswrapper[4025]: I0318 13:05:00.049101 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049355 master-0 kubenswrapper[4025]: I0318 13:05:00.049201 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049355 master-0 kubenswrapper[4025]: I0318 13:05:00.049239 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049355 master-0 kubenswrapper[4025]: I0318 13:05:00.049265 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049355 master-0 kubenswrapper[4025]: I0318 
13:05:00.049294 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049339 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049395 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049454 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049482 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: 
I0318 13:05:00.049509 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049536 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049587 master-0 kubenswrapper[4025]: I0318 13:05:00.049563 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049579 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049634 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: 
I0318 13:05:00.049592 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049689 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049723 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049787 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049788 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.049843 master-0 
kubenswrapper[4025]: I0318 13:05:00.049831 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:00.049843 master-0 kubenswrapper[4025]: I0318 13:05:00.049850 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049872 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049898 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049897 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049922 
4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049938 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049945 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.049968 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.050142 master-0 kubenswrapper[4025]: I0318 13:05:00.050014 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.069553 master-0 kubenswrapper[4025]: I0318 13:05:00.069490 4025 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:00.070444 master-0 kubenswrapper[4025]: I0318 13:05:00.070400 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:00.070483 master-0 kubenswrapper[4025]: I0318 13:05:00.070462 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:00.070483 master-0 kubenswrapper[4025]: I0318 13:05:00.070473 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:00.070541 master-0 kubenswrapper[4025]: I0318 13:05:00.070523 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:00.071181 master-0 kubenswrapper[4025]: E0318 13:05:00.071142 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:00.247398 master-0 kubenswrapper[4025]: I0318 13:05:00.247308 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:00.262903 master-0 kubenswrapper[4025]: E0318 13:05:00.262819 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 13:05:00.266964 master-0 kubenswrapper[4025]: I0318 13:05:00.266921 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:05:00.283457 master-0 kubenswrapper[4025]: I0318 13:05:00.283364 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:05:00.307992 master-0 kubenswrapper[4025]: I0318 13:05:00.307835 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:00.320077 master-0 kubenswrapper[4025]: I0318 13:05:00.320028 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:05:00.471870 master-0 kubenswrapper[4025]: I0318 13:05:00.471812 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:00.472933 master-0 kubenswrapper[4025]: I0318 13:05:00.472898 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:00.472933 master-0 kubenswrapper[4025]: I0318 13:05:00.472925 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:00.472933 master-0 kubenswrapper[4025]: I0318 13:05:00.472933 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:00.473083 master-0 kubenswrapper[4025]: I0318 13:05:00.472976 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:00.473776 master-0 kubenswrapper[4025]: E0318 13:05:00.473737 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:00.485622 master-0 kubenswrapper[4025]: W0318 13:05:00.485536 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 18 13:05:00.485716 master-0 kubenswrapper[4025]: E0318 13:05:00.485618 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:00.576143 master-0 kubenswrapper[4025]: W0318 13:05:00.575961 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:00.576143 master-0 kubenswrapper[4025]: E0318 13:05:00.576033 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:00.608568 master-0 kubenswrapper[4025]: W0318 13:05:00.608515 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:00.608629 master-0 kubenswrapper[4025]: E0318 13:05:00.608577 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" logger="UnhandledError" Mar 18 13:05:00.644012 master-0 kubenswrapper[4025]: I0318 13:05:00.643886 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:01.064339 master-0 kubenswrapper[4025]: E0318 13:05:01.064233 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 13:05:01.127269 master-0 kubenswrapper[4025]: W0318 13:05:01.127126 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:01.127269 master-0 kubenswrapper[4025]: E0318 13:05:01.127217 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:01.151535 master-0 kubenswrapper[4025]: W0318 13:05:01.151468 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90 WatchSource:0}: Error finding container 9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90: Status 404 returned error can't find the container with id 
9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90 Mar 18 13:05:01.156820 master-0 kubenswrapper[4025]: I0318 13:05:01.156774 4025 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:05:01.189961 master-0 kubenswrapper[4025]: W0318 13:05:01.189887 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12 WatchSource:0}: Error finding container ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12: Status 404 returned error can't find the container with id ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12 Mar 18 13:05:01.246262 master-0 kubenswrapper[4025]: W0318 13:05:01.246176 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7 WatchSource:0}: Error finding container 492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7: Status 404 returned error can't find the container with id 492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7 Mar 18 13:05:01.274795 master-0 kubenswrapper[4025]: I0318 13:05:01.274715 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:01.276972 master-0 kubenswrapper[4025]: I0318 13:05:01.276921 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:01.276972 master-0 kubenswrapper[4025]: I0318 13:05:01.276977 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:01.277127 master-0 kubenswrapper[4025]: I0318 13:05:01.276993 4025 kubelet_node_status.go:724] 
"Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:01.277127 master-0 kubenswrapper[4025]: I0318 13:05:01.277078 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:01.278329 master-0 kubenswrapper[4025]: E0318 13:05:01.278252 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:01.339006 master-0 kubenswrapper[4025]: W0318 13:05:01.338944 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb WatchSource:0}: Error finding container dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb: Status 404 returned error can't find the container with id dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb Mar 18 13:05:01.608532 master-0 kubenswrapper[4025]: I0318 13:05:01.608195 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 13:05:01.610008 master-0 kubenswrapper[4025]: E0318 13:05:01.609949 4025 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:01.644468 master-0 kubenswrapper[4025]: I0318 13:05:01.644325 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 18 13:05:01.782125 master-0 kubenswrapper[4025]: W0318 13:05:01.782045 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5 WatchSource:0}: Error finding container c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5: Status 404 returned error can't find the container with id c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5 Mar 18 13:05:01.809985 master-0 kubenswrapper[4025]: I0318 13:05:01.809860 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7"} Mar 18 13:05:01.811550 master-0 kubenswrapper[4025]: I0318 13:05:01.811492 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12"} Mar 18 13:05:01.812731 master-0 kubenswrapper[4025]: I0318 13:05:01.812687 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90"} Mar 18 13:05:01.813602 master-0 kubenswrapper[4025]: I0318 13:05:01.813554 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5"} Mar 18 13:05:01.814397 master-0 kubenswrapper[4025]: I0318 
13:05:01.814343 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb"} Mar 18 13:05:02.643167 master-0 kubenswrapper[4025]: I0318 13:05:02.643126 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:02.665249 master-0 kubenswrapper[4025]: E0318 13:05:02.665209 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 13:05:02.829464 master-0 kubenswrapper[4025]: W0318 13:05:02.829394 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:02.829636 master-0 kubenswrapper[4025]: E0318 13:05:02.829475 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:02.878562 master-0 kubenswrapper[4025]: I0318 13:05:02.878523 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:02.879563 master-0 kubenswrapper[4025]: I0318 13:05:02.879524 4025 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:02.879646 master-0 kubenswrapper[4025]: I0318 13:05:02.879573 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:02.879646 master-0 kubenswrapper[4025]: I0318 13:05:02.879584 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:02.879646 master-0 kubenswrapper[4025]: I0318 13:05:02.879647 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:02.880384 master-0 kubenswrapper[4025]: E0318 13:05:02.880357 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:03.119781 master-0 kubenswrapper[4025]: W0318 13:05:03.119728 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:03.119781 master-0 kubenswrapper[4025]: E0318 13:05:03.119780 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:03.249974 master-0 kubenswrapper[4025]: W0318 13:05:03.249915 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 18 13:05:03.249974 master-0 kubenswrapper[4025]: E0318 13:05:03.249970 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:03.302393 master-0 kubenswrapper[4025]: W0318 13:05:03.302327 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:03.302393 master-0 kubenswrapper[4025]: E0318 13:05:03.302386 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:03.643373 master-0 kubenswrapper[4025]: I0318 13:05:03.643051 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:03.821835 master-0 kubenswrapper[4025]: I0318 13:05:03.821729 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f"} Mar 18 13:05:03.821835 master-0 kubenswrapper[4025]: I0318 
13:05:03.821799 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:03.823013 master-0 kubenswrapper[4025]: I0318 13:05:03.822976 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:03.823013 master-0 kubenswrapper[4025]: I0318 13:05:03.823015 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:03.823112 master-0 kubenswrapper[4025]: I0318 13:05:03.823027 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:04.645470 master-0 kubenswrapper[4025]: I0318 13:05:04.643827 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:04.826330 master-0 kubenswrapper[4025]: I0318 13:05:04.826281 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"} Mar 18 13:05:04.831996 master-0 kubenswrapper[4025]: I0318 13:05:04.831948 4025 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f" exitCode=0 Mar 18 13:05:04.831996 master-0 kubenswrapper[4025]: I0318 13:05:04.831990 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f"} Mar 18 13:05:04.832151 master-0 kubenswrapper[4025]: 
I0318 13:05:04.832076 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:04.832958 master-0 kubenswrapper[4025]: I0318 13:05:04.832904 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:04.832958 master-0 kubenswrapper[4025]: I0318 13:05:04.832957 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:04.833084 master-0 kubenswrapper[4025]: I0318 13:05:04.832969 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:05.643729 master-0 kubenswrapper[4025]: I0318 13:05:05.643684 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:05.791195 master-0 kubenswrapper[4025]: I0318 13:05:05.791124 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 13:05:05.792353 master-0 kubenswrapper[4025]: E0318 13:05:05.792283 4025 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:05.836031 master-0 kubenswrapper[4025]: I0318 13:05:05.835564 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"} Mar 18 
13:05:05.836031 master-0 kubenswrapper[4025]: I0318 13:05:05.835662 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:05.837008 master-0 kubenswrapper[4025]: I0318 13:05:05.836496 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:05.837008 master-0 kubenswrapper[4025]: I0318 13:05:05.836516 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:05.837008 master-0 kubenswrapper[4025]: I0318 13:05:05.836524 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:05.838268 master-0 kubenswrapper[4025]: I0318 13:05:05.838172 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 13:05:05.838805 master-0 kubenswrapper[4025]: I0318 13:05:05.838481 4025 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="3a33b0f6ecfcf1d4b000bbfa61f4863aa3bf3e36fab966244e1bb2d1f6df82e9" exitCode=1 Mar 18 13:05:05.838805 master-0 kubenswrapper[4025]: I0318 13:05:05.838504 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"3a33b0f6ecfcf1d4b000bbfa61f4863aa3bf3e36fab966244e1bb2d1f6df82e9"} Mar 18 13:05:05.838805 master-0 kubenswrapper[4025]: I0318 13:05:05.838554 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:05.839531 master-0 kubenswrapper[4025]: I0318 13:05:05.839002 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:05.839531 
master-0 kubenswrapper[4025]: I0318 13:05:05.839018 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:05.839531 master-0 kubenswrapper[4025]: I0318 13:05:05.839025 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:05.839531 master-0 kubenswrapper[4025]: I0318 13:05:05.839231 4025 scope.go:117] "RemoveContainer" containerID="3a33b0f6ecfcf1d4b000bbfa61f4863aa3bf3e36fab966244e1bb2d1f6df82e9" Mar 18 13:05:05.866776 master-0 kubenswrapper[4025]: E0318 13:05:05.866698 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 13:05:06.080669 master-0 kubenswrapper[4025]: I0318 13:05:06.080605 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:06.083043 master-0 kubenswrapper[4025]: I0318 13:05:06.082579 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:06.083043 master-0 kubenswrapper[4025]: I0318 13:05:06.082623 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:06.083043 master-0 kubenswrapper[4025]: I0318 13:05:06.082632 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:06.083043 master-0 kubenswrapper[4025]: I0318 13:05:06.082688 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:05:06.083550 master-0 kubenswrapper[4025]: E0318 13:05:06.083506 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 13:05:06.418673 master-0 kubenswrapper[4025]: E0318 13:05:06.418466 4025 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189df1434ebb8e14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,LastTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:06.643734 master-0 kubenswrapper[4025]: I0318 13:05:06.643620 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:06.843084 master-0 kubenswrapper[4025]: I0318 13:05:06.843043 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 13:05:06.843621 master-0 kubenswrapper[4025]: I0318 13:05:06.843578 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 13:05:06.843902 master-0 kubenswrapper[4025]: I0318 13:05:06.843863 4025 generic.go:334] "Generic 
(PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff" exitCode=1 Mar 18 13:05:06.844035 master-0 kubenswrapper[4025]: I0318 13:05:06.844015 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:06.844576 master-0 kubenswrapper[4025]: I0318 13:05:06.844549 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:06.844791 master-0 kubenswrapper[4025]: I0318 13:05:06.844756 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff"} Mar 18 13:05:06.844827 master-0 kubenswrapper[4025]: I0318 13:05:06.844811 4025 scope.go:117] "RemoveContainer" containerID="3a33b0f6ecfcf1d4b000bbfa61f4863aa3bf3e36fab966244e1bb2d1f6df82e9" Mar 18 13:05:06.845211 master-0 kubenswrapper[4025]: I0318 13:05:06.845190 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:06.845250 master-0 kubenswrapper[4025]: I0318 13:05:06.845221 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:06.845250 master-0 kubenswrapper[4025]: I0318 13:05:06.845233 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:06.845999 master-0 kubenswrapper[4025]: I0318 13:05:06.845971 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:06.845999 master-0 kubenswrapper[4025]: I0318 13:05:06.845994 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 
18 13:05:06.846059 master-0 kubenswrapper[4025]: I0318 13:05:06.846005 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:06.846273 master-0 kubenswrapper[4025]: I0318 13:05:06.846245 4025 scope.go:117] "RemoveContainer" containerID="ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff" Mar 18 13:05:06.846406 master-0 kubenswrapper[4025]: E0318 13:05:06.846375 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 13:05:07.268752 master-0 kubenswrapper[4025]: W0318 13:05:07.268666 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:07.268752 master-0 kubenswrapper[4025]: E0318 13:05:07.268745 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:07.436917 master-0 kubenswrapper[4025]: W0318 13:05:07.436785 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Mar 18 13:05:07.436917 master-0 kubenswrapper[4025]: E0318 13:05:07.436905 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:07.643793 master-0 kubenswrapper[4025]: I0318 13:05:07.643662 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:07.846218 master-0 kubenswrapper[4025]: I0318 13:05:07.846178 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:07.847109 master-0 kubenswrapper[4025]: I0318 13:05:07.847084 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:07.847190 master-0 kubenswrapper[4025]: I0318 13:05:07.847122 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:07.847190 master-0 kubenswrapper[4025]: I0318 13:05:07.847131 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:07.847504 master-0 kubenswrapper[4025]: I0318 13:05:07.847489 4025 scope.go:117] "RemoveContainer" containerID="ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff" Mar 18 13:05:07.847663 master-0 kubenswrapper[4025]: E0318 13:05:07.847642 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 13:05:08.625014 master-0 kubenswrapper[4025]: W0318 13:05:08.624935 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:08.625164 master-0 kubenswrapper[4025]: E0318 13:05:08.625016 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 13:05:08.642846 master-0 kubenswrapper[4025]: I0318 13:05:08.642816 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:09.028923 master-0 kubenswrapper[4025]: W0318 13:05:09.028815 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 13:05:09.029358 master-0 kubenswrapper[4025]: E0318 13:05:09.028944 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 13:05:09.643771 master-0 kubenswrapper[4025]: I0318 13:05:09.643614 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 13:05:09.769618 master-0 kubenswrapper[4025]: E0318 13:05:09.769515 4025 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:05:09.857028 master-0 kubenswrapper[4025]: I0318 13:05:09.856953 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388"}
Mar 18 13:05:09.859840 master-0 kubenswrapper[4025]: I0318 13:05:09.859798 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 13:05:09.863381 master-0 kubenswrapper[4025]: I0318 13:05:09.863344 4025 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d" exitCode=0
Mar 18 13:05:09.863698 master-0 kubenswrapper[4025]: I0318 13:05:09.863487 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d"}
Mar 18 13:05:09.863904 master-0 kubenswrapper[4025]: I0318 13:05:09.863548 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:09.865027 master-0 kubenswrapper[4025]: I0318 13:05:09.864954 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:09.865027 master-0 kubenswrapper[4025]: I0318 13:05:09.865015 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:09.865027 master-0 kubenswrapper[4025]: I0318 13:05:09.865030 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:09.867721 master-0 kubenswrapper[4025]: I0318 13:05:09.867620 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767"}
Mar 18 13:05:09.867721 master-0 kubenswrapper[4025]: I0318 13:05:09.867691 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:09.868528 master-0 kubenswrapper[4025]: I0318 13:05:09.868486 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:09.868528 master-0 kubenswrapper[4025]: I0318 13:05:09.868524 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:09.868528 master-0 kubenswrapper[4025]: I0318 13:05:09.868536 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:09.874690 master-0 kubenswrapper[4025]: I0318 13:05:09.874662 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:09.875598 master-0 kubenswrapper[4025]: I0318 13:05:09.875533 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:09.875598 master-0 kubenswrapper[4025]: I0318 13:05:09.875556 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:09.875598 master-0 kubenswrapper[4025]: I0318 13:05:09.875566 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:10.886349 master-0 kubenswrapper[4025]: I0318 13:05:10.886297 4025 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388" exitCode=1
Mar 18 13:05:10.887021 master-0 kubenswrapper[4025]: I0318 13:05:10.886369 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388"}
Mar 18 13:05:10.888220 master-0 kubenswrapper[4025]: I0318 13:05:10.888057 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7"}
Mar 18 13:05:10.888220 master-0 kubenswrapper[4025]: I0318 13:05:10.888109 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:10.889362 master-0 kubenswrapper[4025]: I0318 13:05:10.888926 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:10.889362 master-0 kubenswrapper[4025]: I0318 13:05:10.888953 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:10.889362 master-0 kubenswrapper[4025]: I0318 13:05:10.888961 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:11.544468 master-0 kubenswrapper[4025]: I0318 13:05:11.543302 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:11.648097 master-0 kubenswrapper[4025]: I0318 13:05:11.647474 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:12.273394 master-0 kubenswrapper[4025]: E0318 13:05:12.272850 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 13:05:12.484477 master-0 kubenswrapper[4025]: I0318 13:05:12.484048 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:12.485243 master-0 kubenswrapper[4025]: I0318 13:05:12.485193 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:12.485243 master-0 kubenswrapper[4025]: I0318 13:05:12.485245 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:12.485347 master-0 kubenswrapper[4025]: I0318 13:05:12.485261 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:12.485396 master-0 kubenswrapper[4025]: I0318 13:05:12.485353 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:12.489822 master-0 kubenswrapper[4025]: E0318 13:05:12.489772 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 13:05:12.647485 master-0 kubenswrapper[4025]: I0318 13:05:12.647370 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:13.648914 master-0 kubenswrapper[4025]: I0318 13:05:13.648829 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:13.828208 master-0 kubenswrapper[4025]: I0318 13:05:13.828008 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 13:05:13.846159 master-0 kubenswrapper[4025]: I0318 13:05:13.846088 4025 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 13:05:13.907067 master-0 kubenswrapper[4025]: I0318 13:05:13.906962 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"ef75baaea3b231f0a943268458f551b383f49ce5906993775a78b47a21e43600"}
Mar 18 13:05:13.907345 master-0 kubenswrapper[4025]: I0318 13:05:13.907081 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.908456 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.908493 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.908529 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.908966 4025 scope.go:117] "RemoveContainer" containerID="ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388"
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.910140 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605"}
Mar 18 13:05:13.910331 master-0 kubenswrapper[4025]: I0318 13:05:13.910249 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:13.911056 master-0 kubenswrapper[4025]: I0318 13:05:13.911012 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:13.911056 master-0 kubenswrapper[4025]: I0318 13:05:13.911051 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:13.911176 master-0 kubenswrapper[4025]: I0318 13:05:13.911062 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:13.980341 master-0 kubenswrapper[4025]: I0318 13:05:13.980233 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 13:05:14.634561 master-0 kubenswrapper[4025]: W0318 13:05:14.634493 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 18 13:05:14.634858 master-0 kubenswrapper[4025]: E0318 13:05:14.634562 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 18 13:05:14.650788 master-0 kubenswrapper[4025]: I0318 13:05:14.650718 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:14.915261 master-0 kubenswrapper[4025]: I0318 13:05:14.915236 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:14.915651 master-0 kubenswrapper[4025]: I0318 13:05:14.915222 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525"}
Mar 18 13:05:14.915651 master-0 kubenswrapper[4025]: I0318 13:05:14.915355 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:14.916527 master-0 kubenswrapper[4025]: I0318 13:05:14.916480 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:14.916611 master-0 kubenswrapper[4025]: I0318 13:05:14.916550 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:14.916611 master-0 kubenswrapper[4025]: I0318 13:05:14.916578 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:14.916821 master-0 kubenswrapper[4025]: I0318 13:05:14.916801 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:14.916924 master-0 kubenswrapper[4025]: I0318 13:05:14.916909 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:14.917001 master-0 kubenswrapper[4025]: I0318 13:05:14.916988 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:15.627761 master-0 kubenswrapper[4025]: W0318 13:05:15.627695 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 18 13:05:15.627761 master-0 kubenswrapper[4025]: E0318 13:05:15.627756 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 13:05:15.652105 master-0 kubenswrapper[4025]: I0318 13:05:15.652035 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:15.724052 master-0 kubenswrapper[4025]: W0318 13:05:15.723986 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 18 13:05:15.724316 master-0 kubenswrapper[4025]: E0318 13:05:15.724058 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 13:05:15.917918 master-0 kubenswrapper[4025]: I0318 13:05:15.917863 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:15.918335 master-0 kubenswrapper[4025]: I0318 13:05:15.918304 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:15.918994 master-0 kubenswrapper[4025]: I0318 13:05:15.918955 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:15.918994 master-0 kubenswrapper[4025]: I0318 13:05:15.918995 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:15.919087 master-0 kubenswrapper[4025]: I0318 13:05:15.919005 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:15.919087 master-0 kubenswrapper[4025]: I0318 13:05:15.919018 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:15.919087 master-0 kubenswrapper[4025]: I0318 13:05:15.919043 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:15.919087 master-0 kubenswrapper[4025]: I0318 13:05:15.919053 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:16.427686 master-0 kubenswrapper[4025]: E0318 13:05:16.427504 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1434ebb8e14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,LastTimestamp:2026-03-18 13:04:59.639860756 +0000 UTC m=+0.559739418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.433498 master-0 kubenswrapper[4025]: E0318 13:05:16.433344 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.440645 master-0 kubenswrapper[4025]: E0318 13:05:16.440556 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.446945 master-0 kubenswrapper[4025]: E0318 13:05:16.446814 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.454559 master-0 kubenswrapper[4025]: E0318 13:05:16.454345 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df143565d88d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.767916756 +0000 UTC m=+0.687795398,LastTimestamp:2026-03-18 13:04:59.767916756 +0000 UTC m=+0.687795398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.462081 master-0 kubenswrapper[4025]: E0318 13:05:16.461929 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.867998986 +0000 UTC m=+0.787877608,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.469629 master-0 kubenswrapper[4025]: E0318 13:05:16.469463 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.868017115 +0000 UTC m=+0.787895737,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.477330 master-0 kubenswrapper[4025]: E0318 13:05:16.477209 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.868024755 +0000 UTC m=+0.787903377,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.482602 master-0 kubenswrapper[4025]: E0318 13:05:16.482449 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.905316726 +0000 UTC m=+0.825195368,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.488217 master-0 kubenswrapper[4025]: E0318 13:05:16.488099 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.905396052 +0000 UTC m=+0.825274674,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.495907 master-0 kubenswrapper[4025]: E0318 13:05:16.495741 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.905485949 +0000 UTC m=+0.825364571,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.502702 master-0 kubenswrapper[4025]: E0318 13:05:16.502583 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.906162333 +0000 UTC m=+0.826040955,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.509517 master-0 kubenswrapper[4025]: E0318 13:05:16.509335 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.906181992 +0000 UTC m=+0.826060614,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.514224 master-0 kubenswrapper[4025]: E0318 13:05:16.514044 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.906189442 +0000 UTC m=+0.826068064,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.520820 master-0 kubenswrapper[4025]: E0318 13:05:16.520727 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.906770468 +0000 UTC m=+0.826649100,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.525376 master-0 kubenswrapper[4025]: E0318 13:05:16.525236 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.906842476 +0000 UTC m=+0.826721108,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.530720 master-0 kubenswrapper[4025]: E0318 13:05:16.530302 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.906924702 +0000 UTC m=+0.826803334,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.539191 master-0 kubenswrapper[4025]: E0318 13:05:16.539044 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.90697369 +0000 UTC m=+0.826852312,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.544426 master-0 kubenswrapper[4025]: E0318 13:05:16.544277 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.90698604 +0000 UTC m=+0.826864662,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.550797 master-0 kubenswrapper[4025]: E0318 13:05:16.550688 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.906993979 +0000 UTC m=+0.826872601,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.558371 master-0 kubenswrapper[4025]: E0318 13:05:16.556483 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.907071986 +0000 UTC m=+0.826950608,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.565928 master-0 kubenswrapper[4025]: E0318 13:05:16.565631 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.907104805 +0000 UTC m=+0.826983427,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:16.571821 master-0 kubenswrapper[4025]: E0318 13:05:16.571674 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352338831\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352338831 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698055217 +0000 UTC m=+0.617933839,LastTimestamp:2026-03-18 13:04:59.907114665 +0000 UTC m=+0.826993287,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.579301 master-0 kubenswrapper[4025]: E0318 13:05:16.579146 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df1435233273e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df1435233273e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698030398 +0000 UTC m=+0.617909020,LastTimestamp:2026-03-18 13:04:59.908666764 +0000 UTC m=+0.828545396,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.586649 master-0 kubenswrapper[4025]: E0318 13:05:16.586313 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189df14352335cd6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189df14352335cd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:04:59.698044118 +0000 UTC m=+0.617922740,LastTimestamp:2026-03-18 13:04:59.908683073 +0000 UTC m=+0.828561695,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.597557 master-0 kubenswrapper[4025]: E0318 13:05:16.597314 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df143a924974d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:01.156693837 +0000 UTC m=+2.076572459,LastTimestamp:2026-03-18 13:05:01.156693837 +0000 UTC m=+2.076572459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.607173 master-0 kubenswrapper[4025]: E0318 13:05:16.606710 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df143ab4c9a2a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:01.192870442 +0000 UTC m=+2.112749064,LastTimestamp:2026-03-18 13:05:01.192870442 +0000 UTC m=+2.112749064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.611825 master-0 kubenswrapper[4025]: E0318 13:05:16.611701 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df143aea712df kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:01.249131231 +0000 UTC m=+2.169009843,LastTimestamp:2026-03-18 13:05:01.249131231 +0000 UTC m=+2.169009843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.619941 master-0 kubenswrapper[4025]: E0318 13:05:16.619808 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df143b42d0afb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:01.341797115 +0000 UTC m=+2.261675737,LastTimestamp:2026-03-18 13:05:01.341797115 +0000 UTC m=+2.261675737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.624577 master-0 kubenswrapper[4025]: E0318 13:05:16.624453 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df143ce8f4f68 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:01.784444776 +0000 UTC m=+2.704323438,LastTimestamp:2026-03-18 13:05:01.784444776 +0000 UTC m=+2.704323438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.631075 master-0 kubenswrapper[4025]: E0318 13:05:16.630991 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df14433d5af07 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.699s (1.699s including waiting). 
Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:03.483555591 +0000 UTC m=+4.403434213,LastTimestamp:2026-03-18 13:05:03.483555591 +0000 UTC m=+4.403434213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.637543 master-0 kubenswrapper[4025]: E0318 13:05:16.637406 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1443e6d134e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:03.661249358 +0000 UTC m=+4.581127980,LastTimestamp:2026-03-18 13:05:03.661249358 +0000 UTC m=+4.581127980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.643007 master-0 kubenswrapper[4025]: E0318 13:05:16.642911 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1443f46ae8b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:03.675510411 +0000 UTC m=+4.595389033,LastTimestamp:2026-03-18 13:05:03.675510411 +0000 UTC m=+4.595389033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.648321 master-0 kubenswrapper[4025]: E0318 13:05:16.648158 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df1446ec5160e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 3.315s (3.315s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.472323598 +0000 UTC m=+5.392202220,LastTimestamp:2026-03-18 13:05:04.472323598 +0000 UTC m=+5.392202220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.648631 master-0 kubenswrapper[4025]: I0318 13:05:16.648591 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:16.651492 master-0 kubenswrapper[4025]: E0318 13:05:16.651393 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14478afdbdf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.638704607 +0000 UTC m=+5.558583229,LastTimestamp:2026-03-18 13:05:04.638704607 +0000 UTC m=+5.558583229,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.655027 master-0 kubenswrapper[4025]: E0318 13:05:16.654947 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189df144794a5202 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.648827394 +0000 UTC m=+5.568706016,LastTimestamp:2026-03-18 13:05:04.648827394 +0000 UTC m=+5.568706016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.660243 master-0 kubenswrapper[4025]: E0318 13:05:16.660150 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14479718801 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.651397121 +0000 UTC m=+5.571275743,LastTimestamp:2026-03-18 13:05:04.651397121 +0000 UTC m=+5.571275743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.667405 master-0 kubenswrapper[4025]: E0318 13:05:16.667280 4025 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14483b13e16 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.823344662 +0000 UTC m=+5.743223284,LastTimestamp:2026-03-18 13:05:04.823344662 +0000 UTC m=+5.743223284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.671766 master-0 kubenswrapper[4025]: E0318 13:05:16.671686 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144846acfe2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.835506146 +0000 UTC m=+5.755384768,LastTimestamp:2026-03-18 13:05:04.835506146 +0000 UTC m=+5.755384768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.675182 master-0 kubenswrapper[4025]: E0318 13:05:16.675092 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df14484f93f0f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.844840719 +0000 UTC m=+5.764719341,LastTimestamp:2026-03-18 13:05:04.844840719 +0000 UTC m=+5.764719341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.678777 master-0 kubenswrapper[4025]: E0318 13:05:16.678663 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448f24472e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.015433006 +0000 UTC m=+5.935311628,LastTimestamp:2026-03-18 
13:05:05.015433006 +0000 UTC m=+5.935311628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.682074 master-0 kubenswrapper[4025]: E0318 13:05:16.681952 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448fe29fe3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.027907555 +0000 UTC m=+5.947786177,LastTimestamp:2026-03-18 13:05:05.027907555 +0000 UTC m=+5.947786177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.686321 master-0 kubenswrapper[4025]: E0318 13:05:16.686260 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df144846acfe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144846acfe2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.835506146 +0000 UTC m=+5.755384768,LastTimestamp:2026-03-18 13:05:05.84193422 +0000 UTC m=+6.761812842,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.692314 master-0 kubenswrapper[4025]: E0318 13:05:16.692188 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df1448f24472e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448f24472e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.015433006 +0000 UTC m=+5.935311628,LastTimestamp:2026-03-18 13:05:06.19058882 +0000 UTC m=+7.110467442,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.697971 master-0 kubenswrapper[4025]: E0318 13:05:16.697777 4025 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df1448fe29fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448fe29fe3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.027907555 +0000 UTC m=+5.947786177,LastTimestamp:2026-03-18 13:05:06.203071211 +0000 UTC m=+7.122949913,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.705795 master-0 kubenswrapper[4025]: E0318 13:05:16.705644 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144fc45d3c1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:06.846348225 +0000 UTC m=+7.766226847,LastTimestamp:2026-03-18 13:05:06.846348225 +0000 UTC m=+7.766226847,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.710868 master-0 kubenswrapper[4025]: E0318 13:05:16.710745 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df144fc45d3c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144fc45d3c1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:06.846348225 +0000 UTC m=+7.766226847,LastTimestamp:2026-03-18 13:05:07.847615753 +0000 UTC m=+8.767494375,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.717713 master-0 kubenswrapper[4025]: E0318 13:05:16.717513 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df1458a48c840 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.035s (8.036s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.22890656 +0000 UTC m=+10.148785222,LastTimestamp:2026-03-18 13:05:09.22890656 +0000 UTC m=+10.148785222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.722933 master-0 kubenswrapper[4025]: E0318 13:05:16.722774 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df1458cf2286a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.931s (7.931s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.273561194 +0000 UTC m=+10.193439816,LastTimestamp:2026-03-18 13:05:09.273561194 +0000 UTC m=+10.193439816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.728978 master-0 kubenswrapper[4025]: E0318 13:05:16.728856 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df1458cf3fac0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.024s (8.024s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.273680576 +0000 UTC m=+10.193559238,LastTimestamp:2026-03-18 13:05:09.273680576 +0000 UTC m=+10.193559238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.734938 master-0 kubenswrapper[4025]: E0318 13:05:16.734764 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df1459625f981 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.427952001 +0000 UTC m=+10.347830623,LastTimestamp:2026-03-18 13:05:09.427952001 +0000 UTC m=+10.347830623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.741604 master-0 kubenswrapper[4025]: E0318 13:05:16.741453 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14596d4365a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.439370842 +0000 UTC m=+10.359249464,LastTimestamp:2026-03-18 13:05:09.439370842 +0000 UTC m=+10.359249464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.745684 master-0 kubenswrapper[4025]: E0318 13:05:16.745510 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14596e28941 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.440309569 +0000 UTC m=+10.360188191,LastTimestamp:2026-03-18 13:05:09.440309569 +0000 UTC m=+10.360188191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.751637 master-0 kubenswrapper[4025]: E0318 13:05:16.751456 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df145973f5d2b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.446393131 +0000 UTC m=+10.366271753,LastTimestamp:2026-03-18 13:05:09.446393131 +0000 UTC m=+10.366271753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.757253 master-0 kubenswrapper[4025]: E0318 13:05:16.757089 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14597576622 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.44796829 +0000 UTC m=+10.367846912,LastTimestamp:2026-03-18 13:05:09.44796829 +0000 UTC m=+10.367846912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.762124 master-0 kubenswrapper[4025]: E0318 13:05:16.762015 4025 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189df14597b732ce kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.454246606 +0000 UTC m=+10.374125228,LastTimestamp:2026-03-18 13:05:09.454246606 +0000 UTC m=+10.374125228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.766497 master-0 kubenswrapper[4025]: E0318 13:05:16.766173 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14597f3bad6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.45821359 +0000 UTC m=+10.378092212,LastTimestamp:2026-03-18 13:05:09.45821359 +0000 UTC m=+10.378092212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.772510 
master-0 kubenswrapper[4025]: E0318 13:05:16.772375 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df145b0c5368d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.874595469 +0000 UTC m=+10.794474101,LastTimestamp:2026-03-18 13:05:09.874595469 +0000 UTC m=+10.794474101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.777082 master-0 kubenswrapper[4025]: E0318 13:05:16.776952 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df145be81ccaf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 
13:05:10.105058479 +0000 UTC m=+11.024937111,LastTimestamp:2026-03-18 13:05:10.105058479 +0000 UTC m=+11.024937111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.781260 master-0 kubenswrapper[4025]: E0318 13:05:16.781116 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df145bf142599 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:10.114649497 +0000 UTC m=+11.034528129,LastTimestamp:2026-03-18 13:05:10.114649497 +0000 UTC m=+11.034528129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.788239 master-0 kubenswrapper[4025]: E0318 13:05:16.788075 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df145bf24402c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:10.115704876 +0000 UTC m=+11.035583518,LastTimestamp:2026-03-18 13:05:10.115704876 +0000 UTC m=+11.035583518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.795496 master-0 kubenswrapper[4025]: E0318 13:05:16.795342 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14688217a85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 3.371s (3.372s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.487743621 +0000 UTC m=+14.407622243,LastTimestamp:2026-03-18 13:05:13.487743621 +0000 UTC m=+14.407622243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.800890 master-0 kubenswrapper[4025]: E0318 13:05:16.800780 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14688a6e702 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 4.056s (4.056s including waiting). 
Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.496487682 +0000 UTC m=+14.416366304,LastTimestamp:2026-03-18 13:05:13.496487682 +0000 UTC m=+14.416366304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.806325 master-0 kubenswrapper[4025]: E0318 13:05:16.806198 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14693c28a2c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.6828483 +0000 UTC m=+14.602726922,LastTimestamp:2026-03-18 13:05:13.6828483 +0000 UTC m=+14.602726922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.814139 master-0 kubenswrapper[4025]: E0318 13:05:16.814013 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189df14694764b8b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.694628747 +0000 UTC m=+14.614507369,LastTimestamp:2026-03-18 13:05:13.694628747 +0000 UTC m=+14.614507369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.818712 master-0 kubenswrapper[4025]: E0318 13:05:16.818528 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df146949e6b2d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.697258285 +0000 UTC m=+14.617136927,LastTimestamp:2026-03-18 13:05:13.697258285 +0000 UTC m=+14.617136927,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.823360 master-0 kubenswrapper[4025]: E0318 13:05:16.823230 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df146951a1c2d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.705364525 +0000 UTC m=+14.625243147,LastTimestamp:2026-03-18 13:05:13.705364525 +0000 UTC m=+14.625243147,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.828182 master-0 kubenswrapper[4025]: E0318 13:05:16.828033 4025 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df146a15fa360 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:13.911247712 +0000 UTC m=+14.831126334,LastTimestamp:2026-03-18 13:05:13.911247712 +0000 UTC m=+14.831126334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" 
Mar 18 13:05:16.832081 master-0 kubenswrapper[4025]: E0318 13:05:16.831973 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189df1459625f981\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df1459625f981 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:09.427952001 +0000 UTC m=+10.347830623,LastTimestamp:2026-03-18 13:05:14.078071729 +0000 UTC m=+14.997950351,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.837475 master-0 kubenswrapper[4025]: E0318 13:05:16.837327 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189df14596d4365a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189df14596d4365a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 
13:05:09.439370842 +0000 UTC m=+10.359249464,LastTimestamp:2026-03-18 13:05:14.087173367 +0000 UTC m=+15.007051989,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:05:16.870946 master-0 kubenswrapper[4025]: W0318 13:05:16.870851 4025 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:16.870946 master-0 kubenswrapper[4025]: E0318 13:05:16.870926 4025 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 13:05:16.894543 master-0 kubenswrapper[4025]: I0318 13:05:16.894396 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:16.920175 master-0 kubenswrapper[4025]: I0318 13:05:16.920097 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:16.920941 master-0 kubenswrapper[4025]: I0318 13:05:16.920889 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:16.920941 master-0 kubenswrapper[4025]: I0318 13:05:16.920927 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:16.921095 master-0 kubenswrapper[4025]: I0318 13:05:16.920950 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:17.098550 master-0 kubenswrapper[4025]: I0318 
13:05:17.098353 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:17.104935 master-0 kubenswrapper[4025]: I0318 13:05:17.104865 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:17.605518 master-0 kubenswrapper[4025]: I0318 13:05:17.605385 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:17.611644 master-0 kubenswrapper[4025]: I0318 13:05:17.606638 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:17.611644 master-0 kubenswrapper[4025]: I0318 13:05:17.610939 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:17.611644 master-0 kubenswrapper[4025]: I0318 13:05:17.610985 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:17.611644 master-0 kubenswrapper[4025]: I0318 13:05:17.610994 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:17.614280 master-0 kubenswrapper[4025]: I0318 13:05:17.614160 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:17.650102 master-0 kubenswrapper[4025]: I0318 13:05:17.650053 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:17.925147 master-0 kubenswrapper[4025]: I0318 13:05:17.924353 4025 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Mar 18 13:05:17.925147 master-0 kubenswrapper[4025]: I0318 13:05:17.924472 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:17.925993 master-0 kubenswrapper[4025]: I0318 13:05:17.925789 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:17.925993 master-0 kubenswrapper[4025]: I0318 13:05:17.925833 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:17.925993 master-0 kubenswrapper[4025]: I0318 13:05:17.925852 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:17.926253 master-0 kubenswrapper[4025]: I0318 13:05:17.926215 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:17.926253 master-0 kubenswrapper[4025]: I0318 13:05:17.926248 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:17.926337 master-0 kubenswrapper[4025]: I0318 13:05:17.926261 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:17.934214 master-0 kubenswrapper[4025]: I0318 13:05:17.934156 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:05:18.650402 master-0 kubenswrapper[4025]: I0318 13:05:18.650257 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 13:05:18.926647 master-0 kubenswrapper[4025]: I0318 13:05:18.926586 4025 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 18 13:05:18.927357 master-0 kubenswrapper[4025]: I0318 13:05:18.926599 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:05:18.927825 master-0 kubenswrapper[4025]: I0318 13:05:18.927769 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:18.927825 master-0 kubenswrapper[4025]: I0318 13:05:18.927820 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:18.927965 master-0 kubenswrapper[4025]: I0318 13:05:18.927838 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:18.927965 master-0 kubenswrapper[4025]: I0318 13:05:18.927849 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:05:18.927965 master-0 kubenswrapper[4025]: I0318 13:05:18.927889 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:05:18.927965 master-0 kubenswrapper[4025]: I0318 13:05:18.927910 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:05:19.281657 master-0 kubenswrapper[4025]: E0318 13:05:19.281582 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 13:05:19.334600 master-0 kubenswrapper[4025]: I0318 13:05:19.334488 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:05:19.338352 master-0 kubenswrapper[4025]: I0318 
13:05:19.338300 4025 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:05:19.490806 master-0 kubenswrapper[4025]: I0318 13:05:19.490714 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:19.492386 master-0 kubenswrapper[4025]: I0318 13:05:19.492331 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:19.492465 master-0 kubenswrapper[4025]: I0318 13:05:19.492395 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:19.492465 master-0 kubenswrapper[4025]: I0318 13:05:19.492442 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:19.492600 master-0 kubenswrapper[4025]: I0318 13:05:19.492569 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:19.498494 master-0 kubenswrapper[4025]: E0318 13:05:19.498373 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 13:05:19.649705 master-0 kubenswrapper[4025]: I0318 13:05:19.649559 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:19.770075 master-0 kubenswrapper[4025]: E0318 13:05:19.769984 4025 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:05:19.792873 master-0 kubenswrapper[4025]: I0318 13:05:19.792785 4025 csr.go:261] certificate signing request csr-jdt5m is approved, waiting to be issued
Mar 18 13:05:19.928864 master-0 kubenswrapper[4025]: I0318 13:05:19.928766 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:19.929718 master-0 kubenswrapper[4025]: I0318 13:05:19.928882 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:05:19.929994 master-0 kubenswrapper[4025]: I0318 13:05:19.929919 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:19.929994 master-0 kubenswrapper[4025]: I0318 13:05:19.929978 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:19.929994 master-0 kubenswrapper[4025]: I0318 13:05:19.929995 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:20.649694 master-0 kubenswrapper[4025]: I0318 13:05:20.649624 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:20.931561 master-0 kubenswrapper[4025]: I0318 13:05:20.931501 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:20.932658 master-0 kubenswrapper[4025]: I0318 13:05:20.932609 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:20.932709 master-0 kubenswrapper[4025]: I0318 13:05:20.932669 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:20.932709 master-0 kubenswrapper[4025]: I0318 13:05:20.932686 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:21.649449 master-0 kubenswrapper[4025]: I0318 13:05:21.649344 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:21.804337 master-0 kubenswrapper[4025]: I0318 13:05:21.804241 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:21.805677 master-0 kubenswrapper[4025]: I0318 13:05:21.805588 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:21.805775 master-0 kubenswrapper[4025]: I0318 13:05:21.805690 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:21.805775 master-0 kubenswrapper[4025]: I0318 13:05:21.805725 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:21.806284 master-0 kubenswrapper[4025]: I0318 13:05:21.806234 4025 scope.go:117] "RemoveContainer" containerID="ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff"
Mar 18 13:05:21.820779 master-0 kubenswrapper[4025]: E0318 13:05:21.820611 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df144846acfe2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144846acfe2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:04.835506146 +0000 UTC m=+5.755384768,LastTimestamp:2026-03-18 13:05:21.810166352 +0000 UTC m=+22.730044964,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:21.994957 master-0 kubenswrapper[4025]: E0318 13:05:21.994690 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df1448f24472e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448f24472e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.015433006 +0000 UTC m=+5.935311628,LastTimestamp:2026-03-18 13:05:21.989566017 +0000 UTC m=+22.909444649,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:22.010873 master-0 kubenswrapper[4025]: E0318 13:05:22.010791 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df1448fe29fe3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df1448fe29fe3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:05.027907555 +0000 UTC m=+5.947786177,LastTimestamp:2026-03-18 13:05:22.006067879 +0000 UTC m=+22.925946501,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:22.650286 master-0 kubenswrapper[4025]: I0318 13:05:22.650216 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:22.947931 master-0 kubenswrapper[4025]: I0318 13:05:22.947898 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:05:22.948870 master-0 kubenswrapper[4025]: I0318 13:05:22.948789 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 13:05:22.949354 master-0 kubenswrapper[4025]: I0318 13:05:22.949301 4025 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b" exitCode=1
Mar 18 13:05:22.949455 master-0 kubenswrapper[4025]: I0318 13:05:22.949354 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b"}
Mar 18 13:05:22.949455 master-0 kubenswrapper[4025]: I0318 13:05:22.949392 4025 scope.go:117] "RemoveContainer" containerID="ca2201597489546190add2483d5c8f3e314a7ecbcfe886ac1c853c808b648bff"
Mar 18 13:05:22.949619 master-0 kubenswrapper[4025]: I0318 13:05:22.949595 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:22.950360 master-0 kubenswrapper[4025]: I0318 13:05:22.950326 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:22.950360 master-0 kubenswrapper[4025]: I0318 13:05:22.950357 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:22.950538 master-0 kubenswrapper[4025]: I0318 13:05:22.950369 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:22.950909 master-0 kubenswrapper[4025]: I0318 13:05:22.950715 4025 scope.go:117] "RemoveContainer" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b"
Mar 18 13:05:22.950909 master-0 kubenswrapper[4025]: E0318 13:05:22.950878 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:05:22.956734 master-0 kubenswrapper[4025]: E0318 13:05:22.956593 4025 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189df144fc45d3c1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189df144fc45d3c1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:05:06.846348225 +0000 UTC m=+7.766226847,LastTimestamp:2026-03-18 13:05:22.950847639 +0000 UTC m=+23.870726271,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:05:23.650128 master-0 kubenswrapper[4025]: I0318 13:05:23.650077 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:23.954788 master-0 kubenswrapper[4025]: I0318 13:05:23.954700 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:05:24.647558 master-0 kubenswrapper[4025]: I0318 13:05:24.647508 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:25.651633 master-0 kubenswrapper[4025]: I0318 13:05:25.651580 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:26.291357 master-0 kubenswrapper[4025]: E0318 13:05:26.291229 4025 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 13:05:26.498935 master-0 kubenswrapper[4025]: I0318 13:05:26.498827 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:26.500390 master-0 kubenswrapper[4025]: I0318 13:05:26.500308 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:26.500390 master-0 kubenswrapper[4025]: I0318 13:05:26.500393 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:26.500662 master-0 kubenswrapper[4025]: I0318 13:05:26.500453 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:26.500662 master-0 kubenswrapper[4025]: I0318 13:05:26.500526 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:26.506028 master-0 kubenswrapper[4025]: E0318 13:05:26.505944 4025 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 13:05:26.673472 master-0 kubenswrapper[4025]: I0318 13:05:26.671595 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:26.901658 master-0 kubenswrapper[4025]: I0318 13:05:26.901592 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:05:26.902101 master-0 kubenswrapper[4025]: I0318 13:05:26.901799 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:26.903144 master-0 kubenswrapper[4025]: I0318 13:05:26.903082 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:26.903273 master-0 kubenswrapper[4025]: I0318 13:05:26.903152 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:26.903273 master-0 kubenswrapper[4025]: I0318 13:05:26.903171 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:26.908363 master-0 kubenswrapper[4025]: I0318 13:05:26.908321 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:05:26.975200 master-0 kubenswrapper[4025]: I0318 13:05:26.975013 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:26.976608 master-0 kubenswrapper[4025]: I0318 13:05:26.976554 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:26.976608 master-0 kubenswrapper[4025]: I0318 13:05:26.976610 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:26.976838 master-0 kubenswrapper[4025]: I0318 13:05:26.976627 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:27.649671 master-0 kubenswrapper[4025]: I0318 13:05:27.649558 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:28.650797 master-0 kubenswrapper[4025]: I0318 13:05:28.650708 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:29.648668 master-0 kubenswrapper[4025]: I0318 13:05:29.648590 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:29.770569 master-0 kubenswrapper[4025]: E0318 13:05:29.770506 4025 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:05:30.647215 master-0 kubenswrapper[4025]: I0318 13:05:30.647144 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:31.650980 master-0 kubenswrapper[4025]: I0318 13:05:31.650910 4025 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 13:05:32.613194 master-0 kubenswrapper[4025]: I0318 13:05:32.613156 4025 csr.go:257] certificate signing request csr-jdt5m is issued
Mar 18 13:05:32.651170 master-0 kubenswrapper[4025]: I0318 13:05:32.651116 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:32.666541 master-0 kubenswrapper[4025]: I0318 13:05:32.666481 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:32.724141 master-0 kubenswrapper[4025]: I0318 13:05:32.724092 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:32.991257 master-0 kubenswrapper[4025]: I0318 13:05:32.991224 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:32.991566 master-0 kubenswrapper[4025]: E0318 13:05:32.991542 4025 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 13:05:33.012094 master-0 kubenswrapper[4025]: I0318 13:05:33.012061 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.025229 master-0 kubenswrapper[4025]: I0318 13:05:33.025186 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.079675 master-0 kubenswrapper[4025]: I0318 13:05:33.079637 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.296229 master-0 kubenswrapper[4025]: E0318 13:05:33.296084 4025 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 18 13:05:33.341431 master-0 kubenswrapper[4025]: I0318 13:05:33.341377 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.341606 master-0 kubenswrapper[4025]: E0318 13:05:33.341449 4025 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 13:05:33.439443 master-0 kubenswrapper[4025]: I0318 13:05:33.439348 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.449556 master-0 kubenswrapper[4025]: I0318 13:05:33.449510 4025 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 13:05:33.455096 master-0 kubenswrapper[4025]: I0318 13:05:33.455061 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.506595 master-0 kubenswrapper[4025]: I0318 13:05:33.506507 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:33.508520 master-0 kubenswrapper[4025]: I0318 13:05:33.508474 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:33.508520 master-0 kubenswrapper[4025]: I0318 13:05:33.508510 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:33.508520 master-0 kubenswrapper[4025]: I0318 13:05:33.508519 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:33.508865 master-0 kubenswrapper[4025]: I0318 13:05:33.508574 4025 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:05:33.511657 master-0 kubenswrapper[4025]: I0318 13:05:33.511611 4025 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 13:05:33.519221 master-0 kubenswrapper[4025]: I0318 13:05:33.519185 4025 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 13:05:33.519459 master-0 kubenswrapper[4025]: E0318 13:05:33.519403 4025 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 13:05:33.528688 master-0 kubenswrapper[4025]: E0318 13:05:33.528640 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:33.538837 master-0 kubenswrapper[4025]: I0318 13:05:33.538788 4025 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 18 13:05:33.539135 master-0 kubenswrapper[4025]: W0318 13:05:33.539102 4025 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 18 13:05:33.614589 master-0 kubenswrapper[4025]: I0318 13:05:33.614486 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 08:34:38.315453555 +0000 UTC
Mar 18 13:05:33.614589 master-0 kubenswrapper[4025]: I0318 13:05:33.614533 4025 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h29m4.700924008s for next certificate rotation
Mar 18 13:05:33.629690 master-0 kubenswrapper[4025]: E0318 13:05:33.629655 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:33.672671 master-0 kubenswrapper[4025]: I0318 13:05:33.672630 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 18 13:05:33.693141 master-0 kubenswrapper[4025]: I0318 13:05:33.693082 4025 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 13:05:33.730604 master-0 kubenswrapper[4025]: E0318 13:05:33.730554 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:33.830837 master-0 kubenswrapper[4025]: E0318 13:05:33.830719 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:33.931656 master-0 kubenswrapper[4025]: E0318 13:05:33.931621 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.033130 master-0 kubenswrapper[4025]: E0318 13:05:34.033005 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.134247 master-0 kubenswrapper[4025]: E0318 13:05:34.134171 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.234982 master-0 kubenswrapper[4025]: E0318 13:05:34.234839 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.335594 master-0 kubenswrapper[4025]: E0318 13:05:34.335544 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.436170 master-0 kubenswrapper[4025]: E0318 13:05:34.436132 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.537353 master-0 kubenswrapper[4025]: E0318 13:05:34.537245 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.637806 master-0 kubenswrapper[4025]: E0318 13:05:34.637721 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.739120 master-0 kubenswrapper[4025]: E0318 13:05:34.739068 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.766656 master-0 kubenswrapper[4025]: I0318 13:05:34.766575 4025 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 13:05:34.840046 master-0 kubenswrapper[4025]: E0318 13:05:34.839945 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:34.941085 master-0 kubenswrapper[4025]: E0318 13:05:34.941006 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.041641 master-0 kubenswrapper[4025]: E0318 13:05:35.041581 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.142895 master-0 kubenswrapper[4025]: E0318 13:05:35.142732 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.243633 master-0 kubenswrapper[4025]: E0318 13:05:35.243564 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.344494 master-0 kubenswrapper[4025]: E0318 13:05:35.344452 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.445104 master-0 kubenswrapper[4025]: E0318 13:05:35.445012 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.546005 master-0 kubenswrapper[4025]: E0318 13:05:35.545945 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.646652 master-0 kubenswrapper[4025]: E0318 13:05:35.646565 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.747634 master-0 kubenswrapper[4025]: E0318 13:05:35.747470 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.848681 master-0 kubenswrapper[4025]: E0318 13:05:35.848632 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:35.949291 master-0 kubenswrapper[4025]: E0318 13:05:35.949245 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.050152 master-0 kubenswrapper[4025]: E0318 13:05:36.050030 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.150380 master-0 kubenswrapper[4025]: E0318 13:05:36.150321 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.251390 master-0 kubenswrapper[4025]: E0318 13:05:36.251335 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.352486 master-0 kubenswrapper[4025]: E0318 13:05:36.352282 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.453360 master-0 kubenswrapper[4025]: E0318 13:05:36.453306 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.554327 master-0 kubenswrapper[4025]: E0318 13:05:36.554246 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.655002 master-0 kubenswrapper[4025]: E0318 13:05:36.654869 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.756051 master-0 kubenswrapper[4025]: E0318 13:05:36.755958 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.857640 master-0 kubenswrapper[4025]: E0318 13:05:36.857546 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:36.958074 master-0 kubenswrapper[4025]: E0318 13:05:36.957979 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.058806 master-0 kubenswrapper[4025]: E0318 13:05:37.058738 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.159798 master-0 kubenswrapper[4025]: E0318 13:05:37.159694 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.260961 master-0 kubenswrapper[4025]: E0318 13:05:37.260767 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.361044 master-0 kubenswrapper[4025]: E0318 13:05:37.360948 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.461855 master-0 kubenswrapper[4025]: E0318 13:05:37.461735 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.563027 master-0 kubenswrapper[4025]: E0318 13:05:37.562852 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.663661 master-0 kubenswrapper[4025]: E0318 13:05:37.663499 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.764464 master-0 kubenswrapper[4025]: E0318 13:05:37.764354 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.804276 master-0 kubenswrapper[4025]: I0318 13:05:37.804225 4025 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:05:37.805390 master-0 kubenswrapper[4025]: I0318 13:05:37.805326 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:05:37.805458 master-0 kubenswrapper[4025]: I0318 13:05:37.805399 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:05:37.805458 master-0 kubenswrapper[4025]: I0318 13:05:37.805428 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:05:37.805823 master-0 kubenswrapper[4025]: I0318 13:05:37.805793 4025 scope.go:117] "RemoveContainer" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b"
Mar 18 13:05:37.805990 master-0 kubenswrapper[4025]: E0318 13:05:37.805959 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 13:05:37.865675 master-0 kubenswrapper[4025]: E0318 13:05:37.865484 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:37.966650 master-0 kubenswrapper[4025]: E0318 13:05:37.966525 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.067399 master-0 kubenswrapper[4025]: E0318 13:05:38.067291 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.167848 master-0 kubenswrapper[4025]: E0318 13:05:38.167758 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.268028 master-0 kubenswrapper[4025]: E0318 13:05:38.267921 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.368438 master-0 kubenswrapper[4025]: E0318 13:05:38.368346 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.468852 master-0 kubenswrapper[4025]: E0318 13:05:38.468675 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.569867 master-0 kubenswrapper[4025]: E0318 13:05:38.569771 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.670826 master-0 kubenswrapper[4025]: E0318 13:05:38.670710 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.771545 master-0 kubenswrapper[4025]: E0318 13:05:38.771373 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.871916 master-0 kubenswrapper[4025]: E0318 13:05:38.871788 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:38.972554 master-0 kubenswrapper[4025]: E0318 13:05:38.972479 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.073531 master-0 kubenswrapper[4025]: E0318 13:05:39.073317 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.178692 master-0 kubenswrapper[4025]: E0318 13:05:39.178491 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.279492 master-0 kubenswrapper[4025]: E0318 13:05:39.279302 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.379994 master-0 kubenswrapper[4025]: E0318 13:05:39.379763 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.480222 master-0 kubenswrapper[4025]: E0318 13:05:39.480065 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.581300 master-0 kubenswrapper[4025]: E0318 13:05:39.581203 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.682364 master-0 kubenswrapper[4025]: E0318 13:05:39.682266 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.771705 master-0 kubenswrapper[4025]: E0318 13:05:39.771589 4025 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 13:05:39.782503 master-0 kubenswrapper[4025]: E0318 13:05:39.782390 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.883464 master-0 kubenswrapper[4025]: E0318 13:05:39.883286 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:39.983696 master-0 kubenswrapper[4025]: E0318 13:05:39.983509 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 13:05:40.084806 master-0 kubenswrapper[4025]: E0318 13:05:40.084653 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not
found" Mar 18 13:05:40.185572 master-0 kubenswrapper[4025]: E0318 13:05:40.185480 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.289404 master-0 kubenswrapper[4025]: E0318 13:05:40.289291 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.390058 master-0 kubenswrapper[4025]: E0318 13:05:40.389974 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.423691 master-0 kubenswrapper[4025]: I0318 13:05:40.423618 4025 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:05:40.490783 master-0 kubenswrapper[4025]: E0318 13:05:40.490702 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.591973 master-0 kubenswrapper[4025]: E0318 13:05:40.591656 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.627907 master-0 kubenswrapper[4025]: I0318 13:05:40.627811 4025 csr.go:261] certificate signing request csr-vxg49 is approved, waiting to be issued Mar 18 13:05:40.637081 master-0 kubenswrapper[4025]: I0318 13:05:40.636907 4025 csr.go:257] certificate signing request csr-vxg49 is issued Mar 18 13:05:40.692433 master-0 kubenswrapper[4025]: E0318 13:05:40.692341 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.792714 master-0 kubenswrapper[4025]: E0318 13:05:40.792572 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:40.893820 master-0 kubenswrapper[4025]: E0318 13:05:40.893695 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 
13:05:40.994593 master-0 kubenswrapper[4025]: E0318 13:05:40.994514 4025 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 13:05:41.093041 master-0 kubenswrapper[4025]: I0318 13:05:41.092969 4025 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:05:41.638650 master-0 kubenswrapper[4025]: I0318 13:05:41.638603 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 06:01:04.102249854 +0000 UTC Mar 18 13:05:41.639012 master-0 kubenswrapper[4025]: I0318 13:05:41.638987 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h55m22.463270485s for next certificate rotation Mar 18 13:05:41.649979 master-0 kubenswrapper[4025]: I0318 13:05:41.649929 4025 apiserver.go:52] "Watching apiserver" Mar 18 13:05:41.654743 master-0 kubenswrapper[4025]: I0318 13:05:41.654717 4025 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:05:41.655498 master-0 kubenswrapper[4025]: I0318 13:05:41.655480 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-7bfhd","openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp","openshift-network-operator/network-operator-7bd846bfc4-gxxbr"] Mar 18 13:05:41.655911 master-0 kubenswrapper[4025]: I0318 13:05:41.655886 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.656057 master-0 kubenswrapper[4025]: I0318 13:05:41.656018 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.656155 master-0 kubenswrapper[4025]: I0318 13:05:41.655891 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.659482 master-0 kubenswrapper[4025]: I0318 13:05:41.657338 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:05:41.659482 master-0 kubenswrapper[4025]: I0318 13:05:41.658730 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:05:41.659482 master-0 kubenswrapper[4025]: I0318 13:05:41.659375 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:05:41.660530 master-0 kubenswrapper[4025]: I0318 13:05:41.660498 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 18 13:05:41.660673 master-0 kubenswrapper[4025]: I0318 13:05:41.660647 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:05:41.660855 master-0 kubenswrapper[4025]: I0318 13:05:41.660828 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 18 13:05:41.661697 master-0 kubenswrapper[4025]: I0318 13:05:41.661077 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:05:41.662095 master-0 kubenswrapper[4025]: I0318 13:05:41.661606 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 18 13:05:41.662095 master-0 kubenswrapper[4025]: I0318 13:05:41.661866 4025 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 18 13:05:41.662986 master-0 kubenswrapper[4025]: I0318 13:05:41.662954 4025 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 13:05:41.746228 master-0 kubenswrapper[4025]: I0318 13:05:41.746176 4025 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 13:05:41.821751 master-0 kubenswrapper[4025]: I0318 13:05:41.821616 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr46g\" (UniqueName: \"kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.821751 master-0 kubenswrapper[4025]: I0318 13:05:41.821688 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.821751 master-0 kubenswrapper[4025]: I0318 13:05:41.821760 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.821821 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " 
pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.821914 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.821951 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.821983 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822020 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822050 4025 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822082 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822114 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822144 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.822987 master-0 kubenswrapper[4025]: I0318 13:05:41.822174 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.922698 master-0 kubenswrapper[4025]: I0318 13:05:41.922567 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.922698 master-0 kubenswrapper[4025]: I0318 13:05:41.922692 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922775 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922818 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr46g\" (UniqueName: \"kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " 
pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922853 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922888 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922917 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922948 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.922978 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.923012 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.923047 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.923092 master-0 kubenswrapper[4025]: I0318 13:05:41.923077 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.924246 master-0 kubenswrapper[4025]: I0318 13:05:41.923110 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 
13:05:41.924246 master-0 kubenswrapper[4025]: I0318 13:05:41.923144 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.924246 master-0 kubenswrapper[4025]: E0318 13:05:41.923271 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:41.924246 master-0 kubenswrapper[4025]: E0318 13:05:41.923349 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:05:42.423320284 +0000 UTC m=+43.343198946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:41.925031 master-0 kubenswrapper[4025]: I0318 13:05:41.924904 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.925335 master-0 kubenswrapper[4025]: I0318 13:05:41.925030 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.925335 master-0 kubenswrapper[4025]: I0318 13:05:41.925286 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.925335 master-0 kubenswrapper[4025]: I0318 13:05:41.925300 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.925335 master-0 kubenswrapper[4025]: I0318 13:05:41.925335 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.925335 master-0 kubenswrapper[4025]: I0318 13:05:41.925501 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.927094 master-0 kubenswrapper[4025]: I0318 13:05:41.926767 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.928066 master-0 kubenswrapper[4025]: I0318 13:05:41.927829 4025 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 13:05:41.936569 master-0 kubenswrapper[4025]: I0318 13:05:41.933320 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.945477 master-0 kubenswrapper[4025]: I0318 13:05:41.944676 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr46g\" (UniqueName: \"kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g\") pod \"assisted-installer-controller-7bfhd\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:41.962291 master-0 kubenswrapper[4025]: I0318 13:05:41.962201 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:41.962934 master-0 kubenswrapper[4025]: I0318 13:05:41.962879 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:41.973377 master-0 kubenswrapper[4025]: I0318 13:05:41.973308 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:05:42.007202 master-0 kubenswrapper[4025]: I0318 13:05:42.007147 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:42.012239 master-0 kubenswrapper[4025]: I0318 13:05:42.012197 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerStarted","Data":"2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c"} Mar 18 13:05:42.023220 master-0 kubenswrapper[4025]: W0318 13:05:42.023181 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80daec9e_b15b_4782_a1f7_ce398bbe323b.slice/crio-b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d WatchSource:0}: Error finding container b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d: Status 404 returned error can't find the container with id b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d Mar 18 13:05:42.426317 master-0 kubenswrapper[4025]: I0318 13:05:42.426218 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:42.426700 master-0 kubenswrapper[4025]: E0318 13:05:42.426404 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:42.426700 master-0 kubenswrapper[4025]: E0318 13:05:42.426538 4025 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:05:43.426513189 +0000 UTC m=+44.346391851 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:42.639612 master-0 kubenswrapper[4025]: I0318 13:05:42.639523 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 09:14:05.66095134 +0000 UTC Mar 18 13:05:42.639612 master-0 kubenswrapper[4025]: I0318 13:05:42.639569 4025 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h8m23.021386877s for next certificate rotation Mar 18 13:05:43.016006 master-0 kubenswrapper[4025]: I0318 13:05:43.015911 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7bfhd" event={"ID":"80daec9e-b15b-4782-a1f7-ce398bbe323b","Type":"ContainerStarted","Data":"b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d"} Mar 18 13:05:43.434370 master-0 kubenswrapper[4025]: I0318 13:05:43.434300 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:43.434769 master-0 kubenswrapper[4025]: E0318 13:05:43.434561 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret 
"cluster-version-operator-serving-cert" not found Mar 18 13:05:43.434769 master-0 kubenswrapper[4025]: E0318 13:05:43.434732 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:05:45.434700785 +0000 UTC m=+46.354579447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:45.448909 master-0 kubenswrapper[4025]: I0318 13:05:45.448817 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:45.450094 master-0 kubenswrapper[4025]: E0318 13:05:45.449002 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:45.450094 master-0 kubenswrapper[4025]: E0318 13:05:45.449094 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:05:49.449064486 +0000 UTC m=+50.368943108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:47.026379 master-0 kubenswrapper[4025]: I0318 13:05:47.026301 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerStarted","Data":"0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c"} Mar 18 13:05:47.038491 master-0 kubenswrapper[4025]: I0318 13:05:47.038429 4025 generic.go:334] "Generic (PLEG): container finished" podID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerID="2307d9f9b6edb7075e27303dc674c0604795c0e793d990a0bd35a8d4c7882a78" exitCode=0 Mar 18 13:05:47.038491 master-0 kubenswrapper[4025]: I0318 13:05:47.038474 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7bfhd" event={"ID":"80daec9e-b15b-4782-a1f7-ce398bbe323b","Type":"ContainerDied","Data":"2307d9f9b6edb7075e27303dc674c0604795c0e793d990a0bd35a8d4c7882a78"} Mar 18 13:05:47.071924 master-0 kubenswrapper[4025]: I0318 13:05:47.071763 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" podStartSLOduration=8.448168663 podStartE2EDuration="13.071702775s" podCreationTimestamp="2026-03-18 13:05:34 +0000 UTC" firstStartedPulling="2026-03-18 13:05:41.996373977 +0000 UTC m=+42.916252599" lastFinishedPulling="2026-03-18 13:05:46.619908089 +0000 UTC m=+47.539786711" observedRunningTime="2026-03-18 13:05:47.050220857 +0000 UTC m=+47.970099489" watchObservedRunningTime="2026-03-18 13:05:47.071702775 +0000 UTC m=+47.991581417" Mar 18 13:05:48.059067 master-0 kubenswrapper[4025]: I0318 13:05:48.059048 4025 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:48.166606 master-0 kubenswrapper[4025]: I0318 13:05:48.166532 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr46g\" (UniqueName: \"kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g\") pod \"80daec9e-b15b-4782-a1f7-ce398bbe323b\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " Mar 18 13:05:48.166606 master-0 kubenswrapper[4025]: I0318 13:05:48.166580 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf\") pod \"80daec9e-b15b-4782-a1f7-ce398bbe323b\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " Mar 18 13:05:48.166606 master-0 kubenswrapper[4025]: I0318 13:05:48.166609 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle\") pod \"80daec9e-b15b-4782-a1f7-ce398bbe323b\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166648 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "80daec9e-b15b-4782-a1f7-ce398bbe323b" (UID: "80daec9e-b15b-4782-a1f7-ce398bbe323b"). InnerVolumeSpecName "host-ca-bundle". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166688 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "80daec9e-b15b-4782-a1f7-ce398bbe323b" (UID: "80daec9e-b15b-4782-a1f7-ce398bbe323b"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166708 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf\") pod \"80daec9e-b15b-4782-a1f7-ce398bbe323b\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166727 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files\") pod \"80daec9e-b15b-4782-a1f7-ce398bbe323b\" (UID: \"80daec9e-b15b-4782-a1f7-ce398bbe323b\") " Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166816 4025 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:48.166880 master-0 kubenswrapper[4025]: I0318 13:05:48.166833 4025 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:48.167156 master-0 kubenswrapper[4025]: I0318 13:05:48.167135 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "80daec9e-b15b-4782-a1f7-ce398bbe323b" (UID: "80daec9e-b15b-4782-a1f7-ce398bbe323b"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:05:48.167261 master-0 kubenswrapper[4025]: I0318 13:05:48.167210 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "80daec9e-b15b-4782-a1f7-ce398bbe323b" (UID: "80daec9e-b15b-4782-a1f7-ce398bbe323b"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:05:48.169935 master-0 kubenswrapper[4025]: I0318 13:05:48.169901 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g" (OuterVolumeSpecName: "kube-api-access-kr46g") pod "80daec9e-b15b-4782-a1f7-ce398bbe323b" (UID: "80daec9e-b15b-4782-a1f7-ce398bbe323b"). InnerVolumeSpecName "kube-api-access-kr46g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:05:48.267493 master-0 kubenswrapper[4025]: I0318 13:05:48.267436 4025 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:48.267493 master-0 kubenswrapper[4025]: I0318 13:05:48.267484 4025 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/80daec9e-b15b-4782-a1f7-ce398bbe323b-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:48.267493 master-0 kubenswrapper[4025]: I0318 13:05:48.267498 4025 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr46g\" (UniqueName: \"kubernetes.io/projected/80daec9e-b15b-4782-a1f7-ce398bbe323b-kube-api-access-kr46g\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:49.044874 master-0 kubenswrapper[4025]: I0318 13:05:49.044797 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7bfhd" event={"ID":"80daec9e-b15b-4782-a1f7-ce398bbe323b","Type":"ContainerDied","Data":"b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d"} Mar 18 13:05:49.044874 master-0 kubenswrapper[4025]: I0318 13:05:49.044842 4025 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d" Mar 18 13:05:49.044874 master-0 kubenswrapper[4025]: I0318 13:05:49.044846 4025 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:05:49.360823 master-0 kubenswrapper[4025]: I0318 13:05:49.360480 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-pr494"] Mar 18 13:05:49.360823 master-0 kubenswrapper[4025]: E0318 13:05:49.360621 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:05:49.360823 master-0 kubenswrapper[4025]: I0318 13:05:49.360648 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:05:49.360823 master-0 kubenswrapper[4025]: I0318 13:05:49.360701 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:05:49.361542 master-0 kubenswrapper[4025]: I0318 13:05:49.361013 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:49.491826 master-0 kubenswrapper[4025]: I0318 13:05:49.491737 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5c8n\" (UniqueName: \"kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n\") pod \"mtu-prober-pr494\" (UID: \"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7\") " pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:49.491826 master-0 kubenswrapper[4025]: I0318 13:05:49.491816 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:49.492093 master-0 kubenswrapper[4025]: E0318 13:05:49.491939 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:49.492093 master-0 kubenswrapper[4025]: E0318 13:05:49.492003 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:05:57.491985586 +0000 UTC m=+58.411864218 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:49.592455 master-0 kubenswrapper[4025]: I0318 13:05:49.592326 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5c8n\" (UniqueName: \"kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n\") pod \"mtu-prober-pr494\" (UID: \"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7\") " pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:49.614468 master-0 kubenswrapper[4025]: I0318 13:05:49.614242 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5c8n\" (UniqueName: \"kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n\") pod \"mtu-prober-pr494\" (UID: \"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7\") " pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:49.673688 master-0 kubenswrapper[4025]: I0318 13:05:49.673583 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:49.697040 master-0 kubenswrapper[4025]: W0318 13:05:49.696985 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf6a38b_0bdd_4767_bc33_7cc12d9537e7.slice/crio-40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b WatchSource:0}: Error finding container 40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b: Status 404 returned error can't find the container with id 40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b Mar 18 13:05:50.049319 master-0 kubenswrapper[4025]: I0318 13:05:50.049216 4025 generic.go:334] "Generic (PLEG): container finished" podID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerID="b515c044e4a53f4787c6a1c5354de363795974706495b5ee9abee555e41455a3" exitCode=0 Mar 18 13:05:50.049319 master-0 kubenswrapper[4025]: I0318 13:05:50.049262 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pr494" event={"ID":"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7","Type":"ContainerDied","Data":"b515c044e4a53f4787c6a1c5354de363795974706495b5ee9abee555e41455a3"} Mar 18 13:05:50.049319 master-0 kubenswrapper[4025]: I0318 13:05:50.049290 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pr494" event={"ID":"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7","Type":"ContainerStarted","Data":"40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b"} Mar 18 13:05:51.072546 master-0 kubenswrapper[4025]: I0318 13:05:51.072437 4025 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:51.204805 master-0 kubenswrapper[4025]: I0318 13:05:51.204666 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5c8n\" (UniqueName: \"kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n\") pod \"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7\" (UID: \"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7\") " Mar 18 13:05:51.210208 master-0 kubenswrapper[4025]: I0318 13:05:51.210135 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n" (OuterVolumeSpecName: "kube-api-access-n5c8n") pod "1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" (UID: "1bf6a38b-0bdd-4767-bc33-7cc12d9537e7"). InnerVolumeSpecName "kube-api-access-n5c8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:05:51.305842 master-0 kubenswrapper[4025]: I0318 13:05:51.305758 4025 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5c8n\" (UniqueName: \"kubernetes.io/projected/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7-kube-api-access-n5c8n\") on node \"master-0\" DevicePath \"\"" Mar 18 13:05:51.872032 master-0 kubenswrapper[4025]: I0318 13:05:51.871909 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 18 13:05:51.872566 master-0 kubenswrapper[4025]: I0318 13:05:51.872497 4025 scope.go:117] "RemoveContainer" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b" Mar 18 13:05:52.058876 master-0 kubenswrapper[4025]: I0318 13:05:52.058571 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-pr494" event={"ID":"1bf6a38b-0bdd-4767-bc33-7cc12d9537e7","Type":"ContainerDied","Data":"40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b"} Mar 18 13:05:52.058876 master-0 kubenswrapper[4025]: 
I0318 13:05:52.058862 4025 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b" Mar 18 13:05:52.058876 master-0 kubenswrapper[4025]: I0318 13:05:52.058664 4025 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-pr494" Mar 18 13:05:53.064540 master-0 kubenswrapper[4025]: I0318 13:05:53.064402 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 13:05:53.065371 master-0 kubenswrapper[4025]: I0318 13:05:53.064997 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c6b7be01dc24d7f26b3d57447fbf2490a6f4dfb2fb1c9fdf65bee4f74420bdb3"} Mar 18 13:05:53.109399 master-0 kubenswrapper[4025]: I0318 13:05:53.109286 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=2.10925993 podStartE2EDuration="2.10925993s" podCreationTimestamp="2026-03-18 13:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:05:53.108743937 +0000 UTC m=+54.028622629" watchObservedRunningTime="2026-03-18 13:05:53.10925993 +0000 UTC m=+54.029138592" Mar 18 13:05:54.374141 master-0 kubenswrapper[4025]: I0318 13:05:54.374079 4025 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-pr494"] Mar 18 13:05:54.378347 master-0 kubenswrapper[4025]: I0318 13:05:54.378290 4025 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-pr494"] Mar 18 13:05:55.807149 master-0 
kubenswrapper[4025]: I0318 13:05:55.807083 4025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" path="/var/lib/kubelet/pods/1bf6a38b-0bdd-4767-bc33-7cc12d9537e7/volumes" Mar 18 13:05:57.552933 master-0 kubenswrapper[4025]: I0318 13:05:57.552850 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:05:57.553561 master-0 kubenswrapper[4025]: E0318 13:05:57.553016 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:57.553561 master-0 kubenswrapper[4025]: E0318 13:05:57.553107 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:13.553083669 +0000 UTC m=+74.472962311 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:05:59.244515 master-0 kubenswrapper[4025]: I0318 13:05:59.244240 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vkbvp"] Mar 18 13:05:59.244515 master-0 kubenswrapper[4025]: E0318 13:05:59.244314 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:05:59.244515 master-0 kubenswrapper[4025]: I0318 13:05:59.244324 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:05:59.244515 master-0 kubenswrapper[4025]: I0318 13:05:59.244344 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:05:59.245547 master-0 kubenswrapper[4025]: I0318 13:05:59.244532 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.247843 master-0 kubenswrapper[4025]: I0318 13:05:59.247770 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:05:59.247985 master-0 kubenswrapper[4025]: I0318 13:05:59.247968 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:05:59.249375 master-0 kubenswrapper[4025]: I0318 13:05:59.249305 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:05:59.251474 master-0 kubenswrapper[4025]: I0318 13:05:59.251398 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:05:59.265667 master-0 kubenswrapper[4025]: I0318 13:05:59.265590 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265667 master-0 kubenswrapper[4025]: I0318 13:05:59.265664 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265699 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 
13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265732 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265765 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265795 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265824 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265854 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " 
pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265888 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265926 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.265968 master-0 kubenswrapper[4025]: I0318 13:05:59.265966 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266002 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266032 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " 
pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266060 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266087 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266117 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.266640 master-0 kubenswrapper[4025]: I0318 13:05:59.266147 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.366996 master-0 kubenswrapper[4025]: I0318 13:05:59.366909 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " 
pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.366996 master-0 kubenswrapper[4025]: I0318 13:05:59.366997 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.366996 master-0 kubenswrapper[4025]: I0318 13:05:59.367017 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.366996 master-0 kubenswrapper[4025]: I0318 13:05:59.367044 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.366996 master-0 kubenswrapper[4025]: I0318 13:05:59.367058 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367096 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367117 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367134 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367153 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367199 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367234 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367272 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367321 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367350 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367386 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367434 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367461 master-0 kubenswrapper[4025]: I0318 13:05:59.367462 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367483 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367487 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367676 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367710 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367733 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" 
(UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367753 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367783 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.367930 master-0 kubenswrapper[4025]: I0318 13:05:59.367860 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 master-0 kubenswrapper[4025]: I0318 13:05:59.367872 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 master-0 kubenswrapper[4025]: I0318 13:05:59.367986 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 
master-0 kubenswrapper[4025]: I0318 13:05:59.367994 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 master-0 kubenswrapper[4025]: I0318 13:05:59.368094 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 master-0 kubenswrapper[4025]: I0318 13:05:59.368158 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368211 master-0 kubenswrapper[4025]: I0318 13:05:59.368185 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368484 master-0 kubenswrapper[4025]: I0318 13:05:59.368238 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.368484 master-0 kubenswrapper[4025]: I0318 13:05:59.368282 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.389823 master-0 kubenswrapper[4025]: I0318 13:05:59.389760 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.422609 master-0 kubenswrapper[4025]: I0318 13:05:59.422530 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ttdn5"] Mar 18 13:05:59.422988 master-0 kubenswrapper[4025]: I0318 13:05:59.422954 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.426047 master-0 kubenswrapper[4025]: I0318 13:05:59.425990 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:05:59.426703 master-0 kubenswrapper[4025]: I0318 13:05:59.426661 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:05:59.468730 master-0 kubenswrapper[4025]: I0318 13:05:59.468627 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.468730 master-0 kubenswrapper[4025]: I0318 13:05:59.468728 4025 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.468974 master-0 kubenswrapper[4025]: I0318 13:05:59.468780 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.468974 master-0 kubenswrapper[4025]: I0318 13:05:59.468831 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.468974 master-0 kubenswrapper[4025]: I0318 13:05:59.468872 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.468974 master-0 kubenswrapper[4025]: I0318 13:05:59.468915 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: 
\"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.469277 master-0 kubenswrapper[4025]: I0318 13:05:59.469100 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.469277 master-0 kubenswrapper[4025]: I0318 13:05:59.469171 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.556252 master-0 kubenswrapper[4025]: I0318 13:05:59.556068 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vkbvp" Mar 18 13:05:59.569655 master-0 kubenswrapper[4025]: I0318 13:05:59.569609 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569655 master-0 kubenswrapper[4025]: I0318 13:05:59.569661 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569797 master-0 kubenswrapper[4025]: I0318 13:05:59.569690 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569797 master-0 kubenswrapper[4025]: I0318 13:05:59.569711 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569797 master-0 kubenswrapper[4025]: I0318 13:05:59.569740 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569797 master-0 kubenswrapper[4025]: I0318 13:05:59.569776 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569797 master-0 kubenswrapper[4025]: I0318 13:05:59.569796 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.569975 master-0 kubenswrapper[4025]: I0318 13:05:59.569816 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.570970 master-0 kubenswrapper[4025]: I0318 13:05:59.570347 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.570970 master-0 kubenswrapper[4025]: 
I0318 13:05:59.570590 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.570970 master-0 kubenswrapper[4025]: I0318 13:05:59.570602 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.570970 master-0 kubenswrapper[4025]: I0318 13:05:59.570683 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.570970 master-0 kubenswrapper[4025]: I0318 13:05:59.570911 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.571189 master-0 kubenswrapper[4025]: I0318 13:05:59.571168 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " 
pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.571307 master-0 kubenswrapper[4025]: I0318 13:05:59.571262 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.571982 master-0 kubenswrapper[4025]: W0318 13:05:59.571938 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb54f7c6d_b9ec_47c6_90a3_5a8d9bd15b10.slice/crio-95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419 WatchSource:0}: Error finding container 95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419: Status 404 returned error can't find the container with id 95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419 Mar 18 13:05:59.585183 master-0 kubenswrapper[4025]: I0318 13:05:59.585115 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.733964 master-0 kubenswrapper[4025]: I0318 13:05:59.733865 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:05:59.749130 master-0 kubenswrapper[4025]: W0318 13:05:59.749059 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod767da57e_44e4_4861_bc6f_427c5bbb4d9d.slice/crio-c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d WatchSource:0}: Error finding container c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d: Status 404 returned error can't find the container with id c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d Mar 18 13:06:00.085536 master-0 kubenswrapper[4025]: I0318 13:06:00.085461 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerStarted","Data":"c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d"} Mar 18 13:06:00.086849 master-0 kubenswrapper[4025]: I0318 13:06:00.086751 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vkbvp" event={"ID":"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10","Type":"ContainerStarted","Data":"95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419"} Mar 18 13:06:00.224280 master-0 kubenswrapper[4025]: I0318 13:06:00.223673 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kbfbq"] Mar 18 13:06:00.227014 master-0 kubenswrapper[4025]: I0318 13:06:00.226972 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.227366 master-0 kubenswrapper[4025]: E0318 13:06:00.227080 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:00.275680 master-0 kubenswrapper[4025]: I0318 13:06:00.275530 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.276155 master-0 kubenswrapper[4025]: I0318 13:06:00.275713 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.376353 master-0 kubenswrapper[4025]: I0318 13:06:00.376189 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.376353 master-0 kubenswrapper[4025]: I0318 13:06:00.376271 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdtw\" 
(UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.376605 master-0 kubenswrapper[4025]: E0318 13:06:00.376500 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:00.376692 master-0 kubenswrapper[4025]: E0318 13:06:00.376641 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:00.876613084 +0000 UTC m=+61.796491706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:00.400496 master-0 kubenswrapper[4025]: I0318 13:06:00.399193 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.881355 master-0 kubenswrapper[4025]: I0318 13:06:00.880781 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:00.881355 master-0 
kubenswrapper[4025]: E0318 13:06:00.880941 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:00.881355 master-0 kubenswrapper[4025]: E0318 13:06:00.881009 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:01.880990678 +0000 UTC m=+62.800869300 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:01.803578 master-0 kubenswrapper[4025]: I0318 13:06:01.803502 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:01.804080 master-0 kubenswrapper[4025]: E0318 13:06:01.803664 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:01.891871 master-0 kubenswrapper[4025]: I0318 13:06:01.891822 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:01.892054 master-0 kubenswrapper[4025]: E0318 13:06:01.891939 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:01.892054 master-0 kubenswrapper[4025]: E0318 13:06:01.891988 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:03.891975359 +0000 UTC m=+64.811853981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:03.096263 master-0 kubenswrapper[4025]: I0318 13:06:03.096183 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="b002856dfe7358511cd094dcfacc7030cb861d82b50197ce9130a1536facf510" exitCode=0
Mar 18 13:06:03.096263 master-0 kubenswrapper[4025]: I0318 13:06:03.096249 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"b002856dfe7358511cd094dcfacc7030cb861d82b50197ce9130a1536facf510"}
Mar 18 13:06:03.807132 master-0 kubenswrapper[4025]: I0318 13:06:03.804071 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:03.807132 master-0 kubenswrapper[4025]: E0318 13:06:03.804311 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:03.906983 master-0 kubenswrapper[4025]: I0318 13:06:03.906926 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:03.907249 master-0 kubenswrapper[4025]: E0318 13:06:03.907103 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:03.907249 master-0 kubenswrapper[4025]: E0318 13:06:03.907199 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:07.907178861 +0000 UTC m=+68.827057483 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:05.803830 master-0 kubenswrapper[4025]: I0318 13:06:05.803759 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:05.804332 master-0 kubenswrapper[4025]: E0318 13:06:05.803959 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:07.803274 master-0 kubenswrapper[4025]: I0318 13:06:07.803216 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:07.803925 master-0 kubenswrapper[4025]: E0318 13:06:07.803336 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:07.964000 master-0 kubenswrapper[4025]: I0318 13:06:07.963927 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:07.964180 master-0 kubenswrapper[4025]: E0318 13:06:07.964131 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:07.964241 master-0 kubenswrapper[4025]: E0318 13:06:07.964222 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:15.964198755 +0000 UTC m=+76.884077377 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 13:06:09.804036 master-0 kubenswrapper[4025]: I0318 13:06:09.803993 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:09.804475 master-0 kubenswrapper[4025]: E0318 13:06:09.804084 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:11.621581 master-0 kubenswrapper[4025]: I0318 13:06:11.621529 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"]
Mar 18 13:06:11.622059 master-0 kubenswrapper[4025]: I0318 13:06:11.621837 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.623874 master-0 kubenswrapper[4025]: I0318 13:06:11.623829 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:06:11.624140 master-0 kubenswrapper[4025]: I0318 13:06:11.624122 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:06:11.624263 master-0 kubenswrapper[4025]: I0318 13:06:11.624206 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:06:11.624308 master-0 kubenswrapper[4025]: I0318 13:06:11.624279 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:06:11.624468 master-0 kubenswrapper[4025]: I0318 13:06:11.624448 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:06:11.697052 master-0 kubenswrapper[4025]: I0318 13:06:11.696705 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.697052 master-0 kubenswrapper[4025]: I0318 13:06:11.697031 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.697326 master-0 kubenswrapper[4025]: I0318 13:06:11.697147 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.697326 master-0 kubenswrapper[4025]: I0318 13:06:11.697212 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.797935 master-0 kubenswrapper[4025]: I0318 13:06:11.797762 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.797935 master-0 kubenswrapper[4025]: I0318 13:06:11.797866 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.797935 master-0 kubenswrapper[4025]: I0318 13:06:11.797904 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.798277 master-0 kubenswrapper[4025]: I0318 13:06:11.798030 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.801180 master-0 kubenswrapper[4025]: I0318 13:06:11.801137 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.801815 master-0 kubenswrapper[4025]: I0318 13:06:11.801784 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.804438 master-0 kubenswrapper[4025]: I0318 13:06:11.804376 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.809808 master-0 kubenswrapper[4025]: I0318 13:06:11.809479 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:11.809808 master-0 kubenswrapper[4025]: E0318 13:06:11.809637 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:11.814147 master-0 kubenswrapper[4025]: I0318 13:06:11.813516 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 13:06:11.816474 master-0 kubenswrapper[4025]: I0318 13:06:11.815736 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:11.835531 master-0 kubenswrapper[4025]: I0318 13:06:11.835465 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vz82q"]
Mar 18 13:06:11.836246 master-0 kubenswrapper[4025]: I0318 13:06:11.836209 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.839101 master-0 kubenswrapper[4025]: I0318 13:06:11.839055 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 13:06:11.842630 master-0 kubenswrapper[4025]: I0318 13:06:11.842593 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:06:11.861283 master-0 kubenswrapper[4025]: I0318 13:06:11.861190 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.861171453 podStartE2EDuration="861.171453ms" podCreationTimestamp="2026-03-18 13:06:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:11.86101357 +0000 UTC m=+72.780892192" watchObservedRunningTime="2026-03-18 13:06:11.861171453 +0000 UTC m=+72.781050075"
Mar 18 13:06:11.898783 master-0 kubenswrapper[4025]: I0318 13:06:11.898717 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.898783 master-0 kubenswrapper[4025]: I0318 13:06:11.898779 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.898933 master-0 kubenswrapper[4025]: I0318 13:06:11.898803 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.898933 master-0 kubenswrapper[4025]: I0318 13:06:11.898879 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899021 master-0 kubenswrapper[4025]: I0318 13:06:11.898985 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899065 master-0 kubenswrapper[4025]: I0318 13:06:11.899026 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvxj\" (UniqueName: \"kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899065 master-0 kubenswrapper[4025]: I0318 13:06:11.899056 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899140 master-0 kubenswrapper[4025]: I0318 13:06:11.899080 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899140 master-0 kubenswrapper[4025]: I0318 13:06:11.899112 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899233 master-0 kubenswrapper[4025]: I0318 13:06:11.899140 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899233 master-0 kubenswrapper[4025]: I0318 13:06:11.899162 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899233 master-0 kubenswrapper[4025]: I0318 13:06:11.899188 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899233 master-0 kubenswrapper[4025]: I0318 13:06:11.899216 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899233 master-0 kubenswrapper[4025]: I0318 13:06:11.899233 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899247 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899262 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899318 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899557 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899586 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.899620 master-0 kubenswrapper[4025]: I0318 13:06:11.899609 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:11.957955 master-0 kubenswrapper[4025]: I0318 13:06:11.957909 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:06:12.000866 master-0 kubenswrapper[4025]: I0318 13:06:12.000810 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.000866 master-0 kubenswrapper[4025]: I0318 13:06:12.000857 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvxj\" (UniqueName: \"kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000883 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000904 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000924 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000945 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000964 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.000985 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.001014 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.001036 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001069 master-0 kubenswrapper[4025]: I0318 13:06:12.001056 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001542 master-0 kubenswrapper[4025]: I0318 13:06:12.001077 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001542 master-0 kubenswrapper[4025]: I0318 13:06:12.001112 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001542 master-0 kubenswrapper[4025]: I0318 13:06:12.001135 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001542 master-0 kubenswrapper[4025]: I0318 13:06:12.001220 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001542 master-0 kubenswrapper[4025]: I0318 13:06:12.001440 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.001755 master-0 kubenswrapper[4025]: I0318 13:06:12.001719 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002098 master-0 kubenswrapper[4025]: I0318 13:06:12.002063 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002185 master-0 kubenswrapper[4025]: I0318 13:06:12.002120 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002185 master-0 kubenswrapper[4025]: I0318 13:06:12.002153 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002185 master-0 kubenswrapper[4025]: I0318 13:06:12.002182 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002340 master-0 kubenswrapper[4025]: I0318 13:06:12.002212 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002340 master-0 kubenswrapper[4025]: I0318 13:06:12.002248 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002340 master-0 kubenswrapper[4025]: I0318 13:06:12.002279 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002340 master-0 kubenswrapper[4025]: I0318 13:06:12.002294 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002340 master-0 kubenswrapper[4025]: I0318 13:06:12.002313 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002379 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002387 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002404 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002464 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch\") pod \"ovnkube-node-vz82q\"
(UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002480 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002495 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002512 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.002592 master-0 kubenswrapper[4025]: I0318 13:06:12.002570 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.003016 master-0 kubenswrapper[4025]: I0318 13:06:12.002916 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: 
\"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.003016 master-0 kubenswrapper[4025]: I0318 13:06:12.002968 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.003016 master-0 kubenswrapper[4025]: I0318 13:06:12.002938 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.003016 master-0 kubenswrapper[4025]: I0318 13:06:12.003005 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.007235 master-0 kubenswrapper[4025]: I0318 13:06:12.007178 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert\") pod \"ovnkube-node-vz82q\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.026626 master-0 kubenswrapper[4025]: I0318 13:06:12.026588 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvxj\" (UniqueName: \"kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj\") pod \"ovnkube-node-vz82q\" (UID: 
\"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.148583 master-0 kubenswrapper[4025]: I0318 13:06:12.148452 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:12.438553 master-0 kubenswrapper[4025]: W0318 13:06:12.438502 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd42bcf13_548b_46c4_9a3d_a46f1b6ec045.slice/crio-4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573 WatchSource:0}: Error finding container 4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573: Status 404 returned error can't find the container with id 4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573 Mar 18 13:06:13.115945 master-0 kubenswrapper[4025]: I0318 13:06:13.115832 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vkbvp" event={"ID":"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10","Type":"ContainerStarted","Data":"3f3dca80b39a4776e47d8b812a2714786234fa3f72e9236861e58ef6c6314c8f"} Mar 18 13:06:13.117775 master-0 kubenswrapper[4025]: I0318 13:06:13.116586 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"7827d1f532304b88e2b1fd0c1b038cfb1742dc595ef169131ab754ace7193b44"} Mar 18 13:06:13.118707 master-0 kubenswrapper[4025]: I0318 13:06:13.118667 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="2c7ef62a916ad3298edbd1aa1cbc3e8ff60647bfc3a55655d38feae6a6189afb" exitCode=0 Mar 18 13:06:13.118768 master-0 kubenswrapper[4025]: I0318 13:06:13.118740 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" 
event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"2c7ef62a916ad3298edbd1aa1cbc3e8ff60647bfc3a55655d38feae6a6189afb"} Mar 18 13:06:13.120378 master-0 kubenswrapper[4025]: I0318 13:06:13.120309 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"290bb48008e0b7c46cc865de084cb0c95db01085d2c9c06c7668d41505cbf49a"} Mar 18 13:06:13.120459 master-0 kubenswrapper[4025]: I0318 13:06:13.120386 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573"} Mar 18 13:06:13.130219 master-0 kubenswrapper[4025]: I0318 13:06:13.130145 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vkbvp" podStartSLOduration=1.187606733 podStartE2EDuration="14.130128283s" podCreationTimestamp="2026-03-18 13:05:59 +0000 UTC" firstStartedPulling="2026-03-18 13:05:59.575055811 +0000 UTC m=+60.494934433" lastFinishedPulling="2026-03-18 13:06:12.517577361 +0000 UTC m=+73.437455983" observedRunningTime="2026-03-18 13:06:13.128251508 +0000 UTC m=+74.048130160" watchObservedRunningTime="2026-03-18 13:06:13.130128283 +0000 UTC m=+74.050006935" Mar 18 13:06:13.618212 master-0 kubenswrapper[4025]: I0318 13:06:13.618143 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:06:13.618377 master-0 kubenswrapper[4025]: E0318 13:06:13.618344 4025 
secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:13.618485 master-0 kubenswrapper[4025]: E0318 13:06:13.618456 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:45.618431486 +0000 UTC m=+106.538310098 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:13.803957 master-0 kubenswrapper[4025]: I0318 13:06:13.803886 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:13.804190 master-0 kubenswrapper[4025]: E0318 13:06:13.804012 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:14.819894 master-0 kubenswrapper[4025]: I0318 13:06:14.819859 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-kcsgp"] Mar 18 13:06:14.820431 master-0 kubenswrapper[4025]: I0318 13:06:14.820174 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:14.820431 master-0 kubenswrapper[4025]: E0318 13:06:14.820240 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:14.929198 master-0 kubenswrapper[4025]: I0318 13:06:14.929123 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:15.029839 master-0 kubenswrapper[4025]: I0318 13:06:15.029748 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:15.043492 master-0 kubenswrapper[4025]: E0318 13:06:15.043458 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:15.043492 master-0 kubenswrapper[4025]: E0318 13:06:15.043490 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:15.043602 master-0 
kubenswrapper[4025]: E0318 13:06:15.043504 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:15.043602 master-0 kubenswrapper[4025]: E0318 13:06:15.043562 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:15.543546425 +0000 UTC m=+76.463425047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:15.098054 master-0 kubenswrapper[4025]: I0318 13:06:15.097986 4025 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:06:15.128064 master-0 kubenswrapper[4025]: I0318 13:06:15.127207 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerStarted","Data":"e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8"} Mar 18 13:06:15.634987 master-0 kubenswrapper[4025]: I0318 13:06:15.634931 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" 
(UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:15.635180 master-0 kubenswrapper[4025]: E0318 13:06:15.635119 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:15.635180 master-0 kubenswrapper[4025]: E0318 13:06:15.635145 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:15.635180 master-0 kubenswrapper[4025]: E0318 13:06:15.635160 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:15.635358 master-0 kubenswrapper[4025]: E0318 13:06:15.635222 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:16.63520529 +0000 UTC m=+77.555083972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:15.803956 master-0 kubenswrapper[4025]: I0318 13:06:15.803883 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:15.804167 master-0 kubenswrapper[4025]: E0318 13:06:15.803996 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:16.038094 master-0 kubenswrapper[4025]: I0318 13:06:16.038007 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:16.038899 master-0 kubenswrapper[4025]: E0318 13:06:16.038225 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:16.038899 master-0 kubenswrapper[4025]: E0318 13:06:16.038324 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:32.038299202 +0000 UTC m=+92.958177824 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:16.132139 master-0 kubenswrapper[4025]: I0318 13:06:16.132049 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8" exitCode=0 Mar 18 13:06:16.132139 master-0 kubenswrapper[4025]: I0318 13:06:16.132112 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8"} Mar 18 13:06:16.645050 master-0 kubenswrapper[4025]: I0318 13:06:16.644984 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:16.645258 master-0 kubenswrapper[4025]: E0318 13:06:16.645178 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:16.645258 master-0 kubenswrapper[4025]: E0318 13:06:16.645196 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:16.645258 master-0 kubenswrapper[4025]: E0318 13:06:16.645208 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for 
pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:16.645258 master-0 kubenswrapper[4025]: E0318 13:06:16.645249 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:18.645234852 +0000 UTC m=+79.565113474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:16.804818 master-0 kubenswrapper[4025]: I0318 13:06:16.804734 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:16.805009 master-0 kubenswrapper[4025]: E0318 13:06:16.804945 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:17.804016 master-0 kubenswrapper[4025]: I0318 13:06:17.803970 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:17.804539 master-0 kubenswrapper[4025]: E0318 13:06:17.804124 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:18.660531 master-0 kubenswrapper[4025]: I0318 13:06:18.660487 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:18.660712 master-0 kubenswrapper[4025]: E0318 13:06:18.660669 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:18.660712 master-0 kubenswrapper[4025]: E0318 13:06:18.660688 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:18.660712 master-0 kubenswrapper[4025]: E0318 13:06:18.660698 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:18.660800 master-0 kubenswrapper[4025]: E0318 13:06:18.660745 4025 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:22.660731106 +0000 UTC m=+83.580609728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:18.803706 master-0 kubenswrapper[4025]: I0318 13:06:18.803570 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:18.803706 master-0 kubenswrapper[4025]: E0318 13:06:18.803716 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:19.804051 master-0 kubenswrapper[4025]: I0318 13:06:19.803981 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:19.804717 master-0 kubenswrapper[4025]: E0318 13:06:19.804668 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:19.899044 master-0 kubenswrapper[4025]: W0318 13:06:19.898984 4025 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 13:06:19.899495 master-0 kubenswrapper[4025]: I0318 13:06:19.899452 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 13:06:20.803717 master-0 kubenswrapper[4025]: I0318 13:06:20.803662 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:20.803901 master-0 kubenswrapper[4025]: E0318 13:06:20.803780 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:21.005869 master-0 kubenswrapper[4025]: I0318 13:06:21.005751 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-x8r78"] Mar 18 13:06:21.006253 master-0 kubenswrapper[4025]: I0318 13:06:21.006132 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.008775 master-0 kubenswrapper[4025]: I0318 13:06:21.008746 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:06:21.009003 master-0 kubenswrapper[4025]: I0318 13:06:21.008931 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:06:21.009003 master-0 kubenswrapper[4025]: I0318 13:06:21.008966 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:06:21.010016 master-0 kubenswrapper[4025]: I0318 13:06:21.009030 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:06:21.010084 master-0 kubenswrapper[4025]: I0318 13:06:21.009092 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:06:21.058257 master-0 kubenswrapper[4025]: I0318 13:06:21.058110 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.058090146 podStartE2EDuration="2.058090146s" podCreationTimestamp="2026-03-18 13:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:21.034049214 +0000 
UTC m=+81.953927836" watchObservedRunningTime="2026-03-18 13:06:21.058090146 +0000 UTC m=+81.977968768" Mar 18 13:06:21.080748 master-0 kubenswrapper[4025]: I0318 13:06:21.080680 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.080748 master-0 kubenswrapper[4025]: I0318 13:06:21.080743 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.080976 master-0 kubenswrapper[4025]: I0318 13:06:21.080786 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.080976 master-0 kubenswrapper[4025]: I0318 13:06:21.080820 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.182162 master-0 kubenswrapper[4025]: I0318 13:06:21.182118 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.182162 master-0 kubenswrapper[4025]: I0318 13:06:21.182163 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.182375 master-0 kubenswrapper[4025]: I0318 13:06:21.182191 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.182375 master-0 kubenswrapper[4025]: E0318 13:06:21.182313 4025 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 18 13:06:21.182531 master-0 kubenswrapper[4025]: E0318 13:06:21.182382 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert podName:0e7156cf-2d68-4de8-b7e7-60e1539590dd nodeName:}" failed. No retries permitted until 2026-03-18 13:06:21.682363517 +0000 UTC m=+82.602242139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert") pod "network-node-identity-x8r78" (UID: "0e7156cf-2d68-4de8-b7e7-60e1539590dd") : secret "network-node-identity-cert" not found Mar 18 13:06:21.182531 master-0 kubenswrapper[4025]: I0318 13:06:21.182425 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.183023 master-0 kubenswrapper[4025]: I0318 13:06:21.182992 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.186965 master-0 kubenswrapper[4025]: I0318 13:06:21.183640 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.197773 master-0 kubenswrapper[4025]: I0318 13:06:21.197722 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.687003 master-0 
kubenswrapper[4025]: I0318 13:06:21.686941 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.690827 master-0 kubenswrapper[4025]: I0318 13:06:21.690789 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.803811 master-0 kubenswrapper[4025]: I0318 13:06:21.803736 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:21.804053 master-0 kubenswrapper[4025]: E0318 13:06:21.804009 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:21.923631 master-0 kubenswrapper[4025]: I0318 13:06:21.921525 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:06:21.939771 master-0 kubenswrapper[4025]: W0318 13:06:21.939657 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e7156cf_2d68_4de8_b7e7_60e1539590dd.slice/crio-6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7 WatchSource:0}: Error finding container 6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7: Status 404 returned error can't find the container with id 6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7 Mar 18 13:06:22.149149 master-0 kubenswrapper[4025]: I0318 13:06:22.149082 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="ce72b00f2972d5446b5f276006e7acfa3fdc14bc227bc60b88d427b8aca46c01" exitCode=0 Mar 18 13:06:22.149660 master-0 kubenswrapper[4025]: I0318 13:06:22.149172 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"ce72b00f2972d5446b5f276006e7acfa3fdc14bc227bc60b88d427b8aca46c01"} Mar 18 13:06:22.152790 master-0 kubenswrapper[4025]: I0318 13:06:22.152755 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7"} Mar 18 13:06:22.696646 master-0 kubenswrapper[4025]: I0318 13:06:22.696597 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " 
pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:22.696839 master-0 kubenswrapper[4025]: E0318 13:06:22.696743 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:22.696839 master-0 kubenswrapper[4025]: E0318 13:06:22.696765 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:22.696839 master-0 kubenswrapper[4025]: E0318 13:06:22.696780 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:22.696839 master-0 kubenswrapper[4025]: E0318 13:06:22.696833 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:30.696817654 +0000 UTC m=+91.616696286 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:22.804153 master-0 kubenswrapper[4025]: I0318 13:06:22.804110 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:22.804333 master-0 kubenswrapper[4025]: E0318 13:06:22.804271 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:23.804730 master-0 kubenswrapper[4025]: I0318 13:06:23.804133 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:23.804730 master-0 kubenswrapper[4025]: E0318 13:06:23.804301 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:24.804085 master-0 kubenswrapper[4025]: I0318 13:06:24.803694 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:24.804085 master-0 kubenswrapper[4025]: E0318 13:06:24.803861 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:25.805759 master-0 kubenswrapper[4025]: I0318 13:06:25.804127 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:25.805759 master-0 kubenswrapper[4025]: E0318 13:06:25.804283 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:26.804239 master-0 kubenswrapper[4025]: I0318 13:06:26.803767 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:26.804239 master-0 kubenswrapper[4025]: E0318 13:06:26.803906 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:27.804486 master-0 kubenswrapper[4025]: I0318 13:06:27.804101 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:27.804486 master-0 kubenswrapper[4025]: E0318 13:06:27.804244 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:28.803701 master-0 kubenswrapper[4025]: I0318 13:06:28.803661 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:28.803924 master-0 kubenswrapper[4025]: E0318 13:06:28.803763 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:29.803325 master-0 kubenswrapper[4025]: I0318 13:06:29.803271 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:29.804207 master-0 kubenswrapper[4025]: E0318 13:06:29.803909 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:30.762189 master-0 kubenswrapper[4025]: I0318 13:06:30.762131 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:30.762788 master-0 kubenswrapper[4025]: E0318 13:06:30.762269 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:30.762788 master-0 kubenswrapper[4025]: E0318 13:06:30.762284 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:30.762788 master-0 kubenswrapper[4025]: E0318 13:06:30.762293 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:30.762788 master-0 kubenswrapper[4025]: E0318 13:06:30.762333 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:46.762321324 +0000 UTC m=+107.682199946 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:30.803587 master-0 kubenswrapper[4025]: I0318 13:06:30.803540 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:30.803587 master-0 kubenswrapper[4025]: E0318 13:06:30.803665 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:31.803858 master-0 kubenswrapper[4025]: I0318 13:06:31.803740 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:31.804286 master-0 kubenswrapper[4025]: E0318 13:06:31.803998 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:32.072833 master-0 kubenswrapper[4025]: I0318 13:06:32.072769 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:32.072989 master-0 kubenswrapper[4025]: E0318 13:06:32.072914 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:32.072989 master-0 kubenswrapper[4025]: E0318 13:06:32.072963 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:04.072949567 +0000 UTC m=+124.992828189 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 13:06:32.176286 master-0 kubenswrapper[4025]: I0318 13:06:32.176247 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"bdb06b047a43d8f5cc135f15126477528bd6743cd5d10a3d7306b59927303450"} Mar 18 13:06:32.188293 master-0 kubenswrapper[4025]: I0318 13:06:32.188259 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4" exitCode=0 Mar 18 13:06:32.188293 master-0 kubenswrapper[4025]: I0318 13:06:32.188309 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"} Mar 18 13:06:32.188585 master-0 kubenswrapper[4025]: I0318 13:06:32.188467 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" podStartSLOduration=1.966262329 podStartE2EDuration="21.188449359s" podCreationTimestamp="2026-03-18 13:06:11 +0000 UTC" firstStartedPulling="2026-03-18 13:06:12.633806899 +0000 UTC m=+73.553685521" lastFinishedPulling="2026-03-18 13:06:31.855993919 +0000 UTC m=+92.775872551" observedRunningTime="2026-03-18 13:06:32.187476175 +0000 UTC m=+93.107354817" watchObservedRunningTime="2026-03-18 13:06:32.188449359 +0000 UTC m=+93.108327981" Mar 18 13:06:32.803828 master-0 kubenswrapper[4025]: I0318 13:06:32.803397 4025 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:32.804064 master-0 kubenswrapper[4025]: E0318 13:06:32.804014 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:33.197993 master-0 kubenswrapper[4025]: I0318 13:06:33.197915 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="7dd7465ff0a0e7bd1744dc8ce263fa13a50d77f65ff8439074a245d515a4445a" exitCode=0 Mar 18 13:06:33.198184 master-0 kubenswrapper[4025]: I0318 13:06:33.198044 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"7dd7465ff0a0e7bd1744dc8ce263fa13a50d77f65ff8439074a245d515a4445a"} Mar 18 13:06:33.204173 master-0 kubenswrapper[4025]: I0318 13:06:33.203907 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} Mar 18 13:06:33.204256 master-0 kubenswrapper[4025]: I0318 13:06:33.204174 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} Mar 18 13:06:33.204256 master-0 kubenswrapper[4025]: I0318 13:06:33.204185 4025 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} Mar 18 13:06:33.204256 master-0 kubenswrapper[4025]: I0318 13:06:33.204195 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} Mar 18 13:06:33.204256 master-0 kubenswrapper[4025]: I0318 13:06:33.204203 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"} Mar 18 13:06:33.204256 master-0 kubenswrapper[4025]: I0318 13:06:33.204235 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"} Mar 18 13:06:33.803912 master-0 kubenswrapper[4025]: I0318 13:06:33.803854 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:33.804377 master-0 kubenswrapper[4025]: E0318 13:06:33.804084 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:34.209397 master-0 kubenswrapper[4025]: I0318 13:06:34.209345 4025 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="22e1bd5e28c298ede758e5ddea0b33351ac8c7be1111bab8e7269abdb7d0b24d" exitCode=0 Mar 18 13:06:34.209397 master-0 kubenswrapper[4025]: I0318 13:06:34.209386 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"22e1bd5e28c298ede758e5ddea0b33351ac8c7be1111bab8e7269abdb7d0b24d"} Mar 18 13:06:34.803922 master-0 kubenswrapper[4025]: I0318 13:06:34.803785 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:34.804102 master-0 kubenswrapper[4025]: E0318 13:06:34.803921 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:35.215977 master-0 kubenswrapper[4025]: I0318 13:06:35.215912 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"4a2b96ab3e758ccd953d067f7229799e7c3da85d90ceb61612bf33b3cfdeebe2"} Mar 18 13:06:35.215977 master-0 kubenswrapper[4025]: I0318 13:06:35.215957 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"3e2abd83cb78987bc1cea9ff0bde57ccd8d857515f5058285b44c75df988a5ac"} Mar 18 13:06:35.221659 master-0 kubenswrapper[4025]: I0318 13:06:35.221601 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerStarted","Data":"a3d21d22429f0c722ae40da9b05bb2559f747cf0cc8ac63b185beb2d14f0e235"} Mar 18 13:06:35.226862 master-0 kubenswrapper[4025]: I0318 13:06:35.226779 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} Mar 18 13:06:35.233690 master-0 kubenswrapper[4025]: I0318 13:06:35.233592 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-x8r78" podStartSLOduration=2.750049464 podStartE2EDuration="15.233563711s" podCreationTimestamp="2026-03-18 13:06:20 +0000 UTC" firstStartedPulling="2026-03-18 13:06:21.943716945 +0000 UTC m=+82.863595567" lastFinishedPulling="2026-03-18 13:06:34.427231192 +0000 UTC m=+95.347109814" 
observedRunningTime="2026-03-18 13:06:35.232823303 +0000 UTC m=+96.152701935" watchObservedRunningTime="2026-03-18 13:06:35.233563711 +0000 UTC m=+96.153442393" Mar 18 13:06:35.263561 master-0 kubenswrapper[4025]: I0318 13:06:35.263466 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" podStartSLOduration=4.09976027 podStartE2EDuration="36.263444343s" podCreationTimestamp="2026-03-18 13:05:59 +0000 UTC" firstStartedPulling="2026-03-18 13:05:59.752344806 +0000 UTC m=+60.672223478" lastFinishedPulling="2026-03-18 13:06:31.916028919 +0000 UTC m=+92.835907551" observedRunningTime="2026-03-18 13:06:35.261939597 +0000 UTC m=+96.181818269" watchObservedRunningTime="2026-03-18 13:06:35.263444343 +0000 UTC m=+96.183323005" Mar 18 13:06:35.804392 master-0 kubenswrapper[4025]: I0318 13:06:35.804293 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:35.804639 master-0 kubenswrapper[4025]: E0318 13:06:35.804598 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:36.803447 master-0 kubenswrapper[4025]: I0318 13:06:36.803340 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:36.804243 master-0 kubenswrapper[4025]: E0318 13:06:36.803581 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:37.806095 master-0 kubenswrapper[4025]: I0318 13:06:37.806035 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:37.807180 master-0 kubenswrapper[4025]: E0318 13:06:37.806195 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:38.243226 master-0 kubenswrapper[4025]: I0318 13:06:38.242865 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerStarted","Data":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} Mar 18 13:06:38.243629 master-0 kubenswrapper[4025]: I0318 13:06:38.243482 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:38.243629 master-0 kubenswrapper[4025]: I0318 13:06:38.243520 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:38.243629 master-0 kubenswrapper[4025]: I0318 13:06:38.243534 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:38.387898 master-0 kubenswrapper[4025]: I0318 13:06:38.387846 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:38.388339 master-0 kubenswrapper[4025]: I0318 13:06:38.388305 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:38.804049 master-0 kubenswrapper[4025]: I0318 13:06:38.803896 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:38.804463 master-0 kubenswrapper[4025]: E0318 13:06:38.804092 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:39.785708 master-0 kubenswrapper[4025]: I0318 13:06:39.785604 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podStartSLOduration=9.238556942 podStartE2EDuration="28.78557901s" podCreationTimestamp="2026-03-18 13:06:11 +0000 UTC" firstStartedPulling="2026-03-18 13:06:12.431877429 +0000 UTC m=+73.351756051" lastFinishedPulling="2026-03-18 13:06:31.978899497 +0000 UTC m=+92.898778119" observedRunningTime="2026-03-18 13:06:39.784653328 +0000 UTC m=+100.704532020" watchObservedRunningTime="2026-03-18 13:06:39.78557901 +0000 UTC m=+100.705457652" Mar 18 13:06:39.804733 master-0 kubenswrapper[4025]: I0318 13:06:39.804657 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:39.804895 master-0 kubenswrapper[4025]: E0318 13:06:39.804842 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:40.180441 master-0 kubenswrapper[4025]: I0318 13:06:40.180363 4025 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vz82q"] Mar 18 13:06:40.766804 master-0 kubenswrapper[4025]: I0318 13:06:40.766762 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-kcsgp"] Mar 18 13:06:40.767002 master-0 kubenswrapper[4025]: I0318 13:06:40.766875 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:40.767002 master-0 kubenswrapper[4025]: E0318 13:06:40.766975 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:40.768578 master-0 kubenswrapper[4025]: I0318 13:06:40.768548 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kbfbq"] Mar 18 13:06:40.768663 master-0 kubenswrapper[4025]: I0318 13:06:40.768623 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:40.768710 master-0 kubenswrapper[4025]: E0318 13:06:40.768683 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:41.251578 master-0 kubenswrapper[4025]: I0318 13:06:41.251492 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-controller" containerID="cri-o://5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33" gracePeriod=30 Mar 18 13:06:41.251578 master-0 kubenswrapper[4025]: I0318 13:06:41.251531 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="nbdb" containerID="cri-o://ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a" gracePeriod=30 Mar 18 13:06:41.251578 master-0 kubenswrapper[4025]: I0318 13:06:41.251561 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b" gracePeriod=30 Mar 18 13:06:41.253358 master-0 kubenswrapper[4025]: I0318 13:06:41.251639 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-acl-logging" containerID="cri-o://e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53" gracePeriod=30 Mar 18 13:06:41.253358 master-0 kubenswrapper[4025]: I0318 13:06:41.251682 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-node" containerID="cri-o://730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab" 
gracePeriod=30 Mar 18 13:06:41.253358 master-0 kubenswrapper[4025]: I0318 13:06:41.251708 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="sbdb" containerID="cri-o://4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c" gracePeriod=30 Mar 18 13:06:41.253358 master-0 kubenswrapper[4025]: I0318 13:06:41.251918 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="northd" containerID="cri-o://493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0" gracePeriod=30 Mar 18 13:06:41.284040 master-0 kubenswrapper[4025]: I0318 13:06:41.283719 4025 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovnkube-controller" containerID="cri-o://1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9" gracePeriod=30 Mar 18 13:06:41.720317 master-0 kubenswrapper[4025]: I0318 13:06:41.720293 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovnkube-controller/0.log" Mar 18 13:06:41.722357 master-0 kubenswrapper[4025]: I0318 13:06:41.722305 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/kube-rbac-proxy-ovn-metrics/0.log" Mar 18 13:06:41.722928 master-0 kubenswrapper[4025]: I0318 13:06:41.722915 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/kube-rbac-proxy-node/0.log" Mar 18 13:06:41.723602 master-0 kubenswrapper[4025]: I0318 13:06:41.723558 4025 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovn-acl-logging/0.log" Mar 18 13:06:41.724447 master-0 kubenswrapper[4025]: I0318 13:06:41.724432 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovn-controller/0.log" Mar 18 13:06:41.724969 master-0 kubenswrapper[4025]: I0318 13:06:41.724954 4025 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" Mar 18 13:06:41.773743 master-0 kubenswrapper[4025]: I0318 13:06:41.773640 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kxqjc"] Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773748 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="northd" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773763 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="northd" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773773 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kubecfg-setup" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773781 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kubecfg-setup" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773789 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-acl-logging" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773798 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-acl-logging" 
Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773806 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="nbdb" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773813 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="nbdb" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773822 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773830 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773838 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovnkube-controller" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773845 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovnkube-controller" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773854 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-controller" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773861 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-controller" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773868 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-node" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773875 4025 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-node" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: E0318 13:06:41.773883 4025 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="sbdb" Mar 18 13:06:41.773897 master-0 kubenswrapper[4025]: I0318 13:06:41.773889 4025 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="sbdb" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773929 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="sbdb" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773940 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovnkube-controller" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773949 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-acl-logging" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773957 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="northd" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773964 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="ovn-controller" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773971 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="kube-rbac-proxy-node" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773978 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 18 13:06:41.774445 master-0 kubenswrapper[4025]: I0318 13:06:41.773986 4025 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerName="nbdb" Mar 18 13:06:41.774838 master-0 kubenswrapper[4025]: I0318 13:06:41.774674 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.829763 master-0 kubenswrapper[4025]: I0318 13:06:41.829707 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829763 master-0 kubenswrapper[4025]: I0318 13:06:41.829764 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbvxj\" (UniqueName: \"kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829786 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829809 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 
master-0 kubenswrapper[4025]: I0318 13:06:41.829825 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829844 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829857 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829875 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829890 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829904 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829919 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829935 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829948 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829968 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.829991 master-0 kubenswrapper[4025]: I0318 13:06:41.829986 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 
13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830005 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830019 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830034 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830051 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830069 4025 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd\") pod \"c0a7c756-575a-4000-b7c1-4f68a93870e8\" (UID: \"c0a7c756-575a-4000-b7c1-4f68a93870e8\") " Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830227 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.830317 master-0 kubenswrapper[4025]: I0318 13:06:41.830258 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.830642 master-0 kubenswrapper[4025]: I0318 13:06:41.830600 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.830690 master-0 kubenswrapper[4025]: I0318 13:06:41.830619 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket" (OuterVolumeSpecName: "log-socket") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831015 master-0 kubenswrapper[4025]: I0318 13:06:41.830996 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:06:41.831084 master-0 kubenswrapper[4025]: I0318 13:06:41.831038 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log" (OuterVolumeSpecName: "node-log") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831365 master-0 kubenswrapper[4025]: I0318 13:06:41.831346 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831449 master-0 kubenswrapper[4025]: I0318 13:06:41.831372 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831449 master-0 kubenswrapper[4025]: I0318 13:06:41.831394 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831449 master-0 kubenswrapper[4025]: I0318 13:06:41.831431 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831453 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831471 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831490 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash" (OuterVolumeSpecName: "host-slash") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831511 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831501 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:06:41.831615 master-0 kubenswrapper[4025]: I0318 13:06:41.831522 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.831918 master-0 kubenswrapper[4025]: I0318 13:06:41.831793 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:06:41.834510 master-0 kubenswrapper[4025]: I0318 13:06:41.834279 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj" (OuterVolumeSpecName: "kube-api-access-hbvxj") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "kube-api-access-hbvxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:06:41.835739 master-0 kubenswrapper[4025]: I0318 13:06:41.835719 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:06:41.838524 master-0 kubenswrapper[4025]: I0318 13:06:41.838503 4025 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c0a7c756-575a-4000-b7c1-4f68a93870e8" (UID: "c0a7c756-575a-4000-b7c1-4f68a93870e8"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:06:41.931292 master-0 kubenswrapper[4025]: I0318 13:06:41.931202 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931292 master-0 kubenswrapper[4025]: I0318 13:06:41.931251 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931292 master-0 kubenswrapper[4025]: I0318 13:06:41.931271 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931292 master-0 kubenswrapper[4025]: I0318 13:06:41.931305 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931322 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931343 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931357 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931373 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931389 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931406 
4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931443 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931457 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931473 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931495 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931519 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931535 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931551 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931564 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931578 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.931787 master-0 kubenswrapper[4025]: I0318 13:06:41.931591 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931617 4025 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931627 4025 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931636 4025 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931766 4025 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931801 4025 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931814 4025 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931859 4025 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931871 4025 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931879 4025 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931887 4025 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931895 4025 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931902 4025 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931910 4025 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931918 4025 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931926 4025 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c0a7c756-575a-4000-b7c1-4f68a93870e8-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931934 4025 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-node-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931942 4025 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931950 4025 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931959 4025 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c0a7c756-575a-4000-b7c1-4f68a93870e8-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:41.933034 master-0 kubenswrapper[4025]: I0318 13:06:41.931967 4025 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbvxj\" (UniqueName: \"kubernetes.io/projected/c0a7c756-575a-4000-b7c1-4f68a93870e8-kube-api-access-hbvxj\") on node \"master-0\" DevicePath \"\"" Mar 18 13:06:42.033473 master-0 kubenswrapper[4025]: I0318 13:06:42.033270 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033473 master-0 kubenswrapper[4025]: I0318 13:06:42.033370 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033466 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033566 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033592 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033660 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033666 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033732 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.033834 master-0 kubenswrapper[4025]: I0318 13:06:42.033784 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod 
\"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.034369 master-0 kubenswrapper[4025]: I0318 13:06:42.033874 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.034369 master-0 kubenswrapper[4025]: I0318 13:06:42.033931 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.034773 master-0 kubenswrapper[4025]: I0318 13:06:42.034683 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.034893 master-0 kubenswrapper[4025]: I0318 13:06:42.034704 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.034893 master-0 kubenswrapper[4025]: I0318 13:06:42.034855 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod 
\"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035050 master-0 kubenswrapper[4025]: I0318 13:06:42.034983 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035146 master-0 kubenswrapper[4025]: I0318 13:06:42.035051 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035146 master-0 kubenswrapper[4025]: I0318 13:06:42.035141 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035174 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035205 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod 
\"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035227 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035260 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035300 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.035315 master-0 kubenswrapper[4025]: I0318 13:06:42.035325 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035361 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: 
\"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035391 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035388 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035484 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035501 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035542 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035544 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035508 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035638 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035675 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035661 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 
13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035768 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.035856 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.036446 master-0 kubenswrapper[4025]: I0318 13:06:42.036305 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.038557 master-0 kubenswrapper[4025]: I0318 13:06:42.036682 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.042575 master-0 kubenswrapper[4025]: I0318 13:06:42.042506 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.063152 master-0 kubenswrapper[4025]: I0318 
13:06:42.063070 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.088205 master-0 kubenswrapper[4025]: I0318 13:06:42.088129 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:42.258642 master-0 kubenswrapper[4025]: I0318 13:06:42.258533 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovnkube-controller/0.log" Mar 18 13:06:42.261672 master-0 kubenswrapper[4025]: I0318 13:06:42.261623 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/kube-rbac-proxy-ovn-metrics/0.log" Mar 18 13:06:42.262357 master-0 kubenswrapper[4025]: I0318 13:06:42.262294 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/kube-rbac-proxy-node/0.log" Mar 18 13:06:42.263128 master-0 kubenswrapper[4025]: I0318 13:06:42.263077 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovn-acl-logging/0.log" Mar 18 13:06:42.263874 master-0 kubenswrapper[4025]: I0318 13:06:42.263829 4025 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vz82q_c0a7c756-575a-4000-b7c1-4f68a93870e8/ovn-controller/0.log" Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264455 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" 
containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9" exitCode=1
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264490 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c" exitCode=0
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264503 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a" exitCode=0
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264509 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0" exitCode=0
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264516 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b" exitCode=143
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264524 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab" exitCode=143
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264531 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53" exitCode=143
Mar 18 13:06:42.264500 master-0 kubenswrapper[4025]: I0318 13:06:42.264538 4025 generic.go:334] "Generic (PLEG): container finished" podID="c0a7c756-575a-4000-b7c1-4f68a93870e8" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33" exitCode=143
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264521 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"}
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264598 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"}
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264620 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264634 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"}
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264668 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"}
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264680 4025 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q"
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264697 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"}
Mar 18 13:06:42.264803 master-0 kubenswrapper[4025]: I0318 13:06:42.264726 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"}
Mar 18 13:06:42.265178 master-0 kubenswrapper[4025]: I0318 13:06:42.264774 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"}
Mar 18 13:06:42.265219 master-0 kubenswrapper[4025]: I0318 13:06:42.265170 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"}
Mar 18 13:06:42.265219 master-0 kubenswrapper[4025]: I0318 13:06:42.265192 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"}
Mar 18 13:06:42.265273 master-0 kubenswrapper[4025]: I0318 13:06:42.265219 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"}
Mar 18 13:06:42.265273 master-0 kubenswrapper[4025]: I0318 13:06:42.265248 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"}
Mar 18 13:06:42.265339 master-0 kubenswrapper[4025]: I0318 13:06:42.265269 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"}
Mar 18 13:06:42.265339 master-0 kubenswrapper[4025]: I0318 13:06:42.265289 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"}
Mar 18 13:06:42.265339 master-0 kubenswrapper[4025]: I0318 13:06:42.265304 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"}
Mar 18 13:06:42.265339 master-0 kubenswrapper[4025]: I0318 13:06:42.265319 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"}
Mar 18 13:06:42.265339 master-0 kubenswrapper[4025]: I0318 13:06:42.265334 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265350 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265366 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265381 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265406 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265468 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"}
Mar 18 13:06:42.265501 master-0 kubenswrapper[4025]: I0318 13:06:42.265489 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265504 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265520 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265535 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265551 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265566 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265581 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265596 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265619 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vz82q" event={"ID":"c0a7c756-575a-4000-b7c1-4f68a93870e8","Type":"ContainerDied","Data":"7827d1f532304b88e2b1fd0c1b038cfb1742dc595ef169131ab754ace7193b44"}
Mar 18 13:06:42.265653 master-0 kubenswrapper[4025]: I0318 13:06:42.265647 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265665 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265681 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265698 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265713 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265729 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265745 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265762 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"}
Mar 18 13:06:42.265984 master-0 kubenswrapper[4025]: I0318 13:06:42.265778 4025 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"}
Mar 18 13:06:42.267271 master-0 kubenswrapper[4025]: I0318 13:06:42.266912 4025 generic.go:334] "Generic (PLEG): container finished" podID="ab2f96fb-ef55-4427-a598-7e3f1e224045" containerID="5848e50846e9206c31c30b47f8e7f2df5ddc303c266302abaf44f36dbaa6229a" exitCode=0
Mar 18 13:06:42.267271 master-0 kubenswrapper[4025]: I0318 13:06:42.266982 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerDied","Data":"5848e50846e9206c31c30b47f8e7f2df5ddc303c266302abaf44f36dbaa6229a"}
Mar 18 13:06:42.267271 master-0 kubenswrapper[4025]: I0318 13:06:42.267024 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5"}
Mar 18 13:06:42.282989 master-0 kubenswrapper[4025]: I0318 13:06:42.282941 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"
Mar 18 13:06:42.290845 master-0 kubenswrapper[4025]: I0318 13:06:42.290824 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"
Mar 18 13:06:42.302023 master-0 kubenswrapper[4025]: I0318 13:06:42.301964 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"
Mar 18 13:06:42.316736 master-0 kubenswrapper[4025]: I0318 13:06:42.316696 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"
Mar 18 13:06:42.331753 master-0 kubenswrapper[4025]: I0318 13:06:42.329057 4025 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vz82q"]
Mar 18 13:06:42.335428 master-0 kubenswrapper[4025]: I0318 13:06:42.335373 4025 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vz82q"]
Mar 18 13:06:42.340752 master-0 kubenswrapper[4025]: I0318 13:06:42.340706 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"
Mar 18 13:06:42.352347 master-0 kubenswrapper[4025]: I0318 13:06:42.352314 4025 scope.go:117] "RemoveContainer" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"
Mar 18 13:06:42.359718 master-0 kubenswrapper[4025]: I0318 13:06:42.359689 4025 scope.go:117] "RemoveContainer" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"
Mar 18 13:06:42.368208 master-0 kubenswrapper[4025]: I0318 13:06:42.368140 4025 scope.go:117] "RemoveContainer" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"
Mar 18 13:06:42.378741 master-0 kubenswrapper[4025]: I0318 13:06:42.378659 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"
Mar 18 13:06:42.379244 master-0 kubenswrapper[4025]: E0318 13:06:42.379206 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"
Mar 18 13:06:42.379296 master-0 kubenswrapper[4025]: I0318 13:06:42.379263 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} err="failed to get container status \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist"
Mar 18 13:06:42.379333 master-0 kubenswrapper[4025]: I0318 13:06:42.379305 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"
Mar 18 13:06:42.382305 master-0 kubenswrapper[4025]: E0318 13:06:42.382282 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"
Mar 18 13:06:42.382366 master-0 kubenswrapper[4025]: I0318 13:06:42.382320 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} err="failed to get container status \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist"
Mar 18 13:06:42.382366 master-0 kubenswrapper[4025]: I0318 13:06:42.382347 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"
Mar 18 13:06:42.382633 master-0 kubenswrapper[4025]: E0318 13:06:42.382609 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"
Mar 18 13:06:42.382715 master-0 kubenswrapper[4025]: I0318 13:06:42.382640 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} err="failed to get container status \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist"
Mar 18 13:06:42.382715 master-0 kubenswrapper[4025]: I0318 13:06:42.382658 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"
Mar 18 13:06:42.382963 master-0 kubenswrapper[4025]: E0318 13:06:42.382947 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"
Mar 18 13:06:42.383018 master-0 kubenswrapper[4025]: I0318 13:06:42.382966 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} err="failed to get container status \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist"
Mar 18 13:06:42.383018 master-0 kubenswrapper[4025]: I0318 13:06:42.382983 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"
Mar 18 13:06:42.383590 master-0 kubenswrapper[4025]: E0318 13:06:42.383560 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"
Mar 18 13:06:42.383639 master-0 kubenswrapper[4025]: I0318 13:06:42.383590 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} err="failed to get container status \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist"
Mar 18 13:06:42.383639 master-0 kubenswrapper[4025]: I0318 13:06:42.383611 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"
Mar 18 13:06:42.383978 master-0 kubenswrapper[4025]: E0318 13:06:42.383943 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"
Mar 18 13:06:42.383978 master-0 kubenswrapper[4025]: I0318 13:06:42.383963 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} err="failed to get container status \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist"
Mar 18 13:06:42.383978 master-0 kubenswrapper[4025]: I0318 13:06:42.383977 4025 scope.go:117] "RemoveContainer" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"
Mar 18 13:06:42.384203 master-0 kubenswrapper[4025]: E0318 13:06:42.384177 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": container with ID starting with e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53 not found: ID does not exist" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"
Mar 18 13:06:42.384243 master-0 kubenswrapper[4025]: I0318 13:06:42.384203 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"} err="failed to get container status \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": rpc error: code = NotFound desc = could not find container \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": container with ID starting with e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53 not found: ID does not exist"
Mar 18 13:06:42.384243 master-0 kubenswrapper[4025]: I0318 13:06:42.384222 4025 scope.go:117] "RemoveContainer" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"
Mar 18 13:06:42.384540 master-0 kubenswrapper[4025]: E0318 13:06:42.384512 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": container with ID starting with 5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33 not found: ID does not exist" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"
Mar 18 13:06:42.384595 master-0 kubenswrapper[4025]: I0318 13:06:42.384542 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"} err="failed to get container status \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": rpc error: code = NotFound desc = could not find container \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": container with ID starting with 5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33 not found: ID does not exist"
Mar 18 13:06:42.384595 master-0 kubenswrapper[4025]: I0318 13:06:42.384572 4025 scope.go:117] "RemoveContainer" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"
Mar 18 13:06:42.386668 master-0 kubenswrapper[4025]: E0318 13:06:42.386608 4025 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": container with ID starting with 021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4 not found: ID does not exist" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"
Mar 18 13:06:42.386668 master-0 kubenswrapper[4025]: I0318 13:06:42.386642 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"} err="failed to get container status \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": rpc error: code = NotFound desc = could not find container \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": container with ID starting with 021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4 not found: ID does not exist"
Mar 18 13:06:42.386668 master-0 kubenswrapper[4025]: I0318 13:06:42.386666 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"
Mar 18 13:06:42.387045 master-0 kubenswrapper[4025]: I0318 13:06:42.387003 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} err="failed to get container status \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist"
Mar 18 13:06:42.387045 master-0 kubenswrapper[4025]: I0318 13:06:42.387032 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"
Mar 18 13:06:42.387295 master-0 kubenswrapper[4025]: I0318 13:06:42.387266 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} err="failed to get container status \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist"
Mar 18 13:06:42.387295 master-0 kubenswrapper[4025]: I0318 13:06:42.387288 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"
Mar 18 13:06:42.387529 master-0 kubenswrapper[4025]: I0318 13:06:42.387496 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} err="failed to get container status \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist"
Mar 18 13:06:42.387529 master-0 kubenswrapper[4025]: I0318 13:06:42.387527 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"
Mar 18 13:06:42.388029 master-0 kubenswrapper[4025]: I0318 13:06:42.387968 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} err="failed to get container status \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist"
Mar 18 13:06:42.388029 master-0 kubenswrapper[4025]: I0318 13:06:42.387995 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"
Mar 18 13:06:42.388299 master-0 kubenswrapper[4025]: I0318 13:06:42.388230 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} err="failed to get container status \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist"
Mar 18 13:06:42.388299 master-0 kubenswrapper[4025]: I0318 13:06:42.388252 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"
Mar 18 13:06:42.388786 master-0 kubenswrapper[4025]: I0318 13:06:42.388757 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} err="failed to get container status \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist"
Mar 18 13:06:42.388786 master-0 kubenswrapper[4025]: I0318 13:06:42.388785 4025 scope.go:117] "RemoveContainer" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"
Mar 18 13:06:42.389009 master-0 kubenswrapper[4025]: I0318 13:06:42.388983 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"} err="failed to get container status \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": rpc error: code = NotFound desc = could not find container \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": container with ID starting with e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53 not found: ID does not exist"
Mar 18 13:06:42.389009 master-0 kubenswrapper[4025]: I0318 13:06:42.389003 4025 scope.go:117] "RemoveContainer" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"
Mar 18 13:06:42.389462 master-0 kubenswrapper[4025]: I0318 13:06:42.389355 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"} err="failed to get container status \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": rpc error: code = NotFound desc = could not find container \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": container with ID starting with 5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33 not found: ID does not exist"
Mar 18 13:06:42.389462 master-0 kubenswrapper[4025]: I0318 13:06:42.389372 4025 scope.go:117] "RemoveContainer" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"
Mar 18 13:06:42.390391 master-0 kubenswrapper[4025]: I0318 13:06:42.390358 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"} err="failed to get container status \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": rpc error: code = NotFound desc = could not find container \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": container with ID starting with 021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4 not found: ID does not exist"
Mar 18 13:06:42.390391 master-0 kubenswrapper[4025]: I0318 13:06:42.390379 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"
Mar 18 13:06:42.390747 master-0 kubenswrapper[4025]: I0318 13:06:42.390706 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} err="failed to get container status \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist"
Mar 18 13:06:42.390747 master-0 kubenswrapper[4025]: I0318 13:06:42.390738 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"
Mar 18 13:06:42.391069 master-0 kubenswrapper[4025]: I0318 13:06:42.391030 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} err="failed to get container status \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist"
Mar 18 13:06:42.391069 master-0 kubenswrapper[4025]: I0318 13:06:42.391058 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"
Mar 18 13:06:42.391283 master-0 kubenswrapper[4025]: I0318 13:06:42.391255 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} err="failed to get container status \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist"
Mar 18 13:06:42.391283 master-0 kubenswrapper[4025]: I0318 13:06:42.391276 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"
Mar 18 13:06:42.391541 master-0 kubenswrapper[4025]: I0318 13:06:42.391510 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} err="failed to get container status \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist"
Mar 18 13:06:42.391541 master-0 kubenswrapper[4025]: I0318 13:06:42.391530 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"
Mar 18 13:06:42.391762 master-0 kubenswrapper[4025]: I0318 13:06:42.391736 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} err="failed to get container status \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist"
Mar 18 13:06:42.391850 master-0 kubenswrapper[4025]: I0318 13:06:42.391761 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"
Mar 18 13:06:42.391955 master-0 kubenswrapper[4025]: I0318 13:06:42.391937 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} err="failed to get container status \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist"
Mar 18 13:06:42.392024 master-0 kubenswrapper[4025]: I0318 13:06:42.391954 4025 scope.go:117] "RemoveContainer" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"
Mar 18 13:06:42.392158 master-0 kubenswrapper[4025]: I0318 13:06:42.392121 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"} err="failed to get container status \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": rpc error: code = NotFound desc = could not find container \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": container with ID starting with e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53 not found: ID does not exist"
Mar 18 13:06:42.392158 master-0 kubenswrapper[4025]: I0318 13:06:42.392138 4025 scope.go:117] "RemoveContainer" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"
Mar 18 13:06:42.392340 master-0 kubenswrapper[4025]: I0318 13:06:42.392319 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"} err="failed to get container status \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": rpc error: code = NotFound desc = could not find container \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": container with ID starting with 5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33 not found: ID does not exist"
Mar 18 13:06:42.392340 master-0 kubenswrapper[4025]: I0318 13:06:42.392336 4025 scope.go:117] "RemoveContainer" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"
Mar 18 13:06:42.392570 master-0 kubenswrapper[4025]: I0318 13:06:42.392542 4025 pod_container_deletor.go:53]
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"} err="failed to get container status \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": rpc error: code = NotFound desc = could not find container \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": container with ID starting with 021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4 not found: ID does not exist" Mar 18 13:06:42.392570 master-0 kubenswrapper[4025]: I0318 13:06:42.392566 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9" Mar 18 13:06:42.392757 master-0 kubenswrapper[4025]: I0318 13:06:42.392735 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} err="failed to get container status \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist" Mar 18 13:06:42.392757 master-0 kubenswrapper[4025]: I0318 13:06:42.392756 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c" Mar 18 13:06:42.392974 master-0 kubenswrapper[4025]: I0318 13:06:42.392950 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} err="failed to get container status \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 
4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist" Mar 18 13:06:42.392974 master-0 kubenswrapper[4025]: I0318 13:06:42.392972 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a" Mar 18 13:06:42.393260 master-0 kubenswrapper[4025]: I0318 13:06:42.393185 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} err="failed to get container status \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist" Mar 18 13:06:42.393333 master-0 kubenswrapper[4025]: I0318 13:06:42.393261 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0" Mar 18 13:06:42.393495 master-0 kubenswrapper[4025]: I0318 13:06:42.393462 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} err="failed to get container status \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist" Mar 18 13:06:42.393495 master-0 kubenswrapper[4025]: I0318 13:06:42.393479 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b" Mar 18 13:06:42.393683 master-0 kubenswrapper[4025]: I0318 13:06:42.393660 4025 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} err="failed to get container status \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist" Mar 18 13:06:42.393683 master-0 kubenswrapper[4025]: I0318 13:06:42.393679 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab" Mar 18 13:06:42.393909 master-0 kubenswrapper[4025]: I0318 13:06:42.393863 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} err="failed to get container status \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist" Mar 18 13:06:42.393909 master-0 kubenswrapper[4025]: I0318 13:06:42.393888 4025 scope.go:117] "RemoveContainer" containerID="e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53" Mar 18 13:06:42.394089 master-0 kubenswrapper[4025]: I0318 13:06:42.394067 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53"} err="failed to get container status \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": rpc error: code = NotFound desc = could not find container \"e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53\": container with ID starting with 
e341f0b47b4e315a8a056d4358c9519406dcd10924971a2272c438c2a8e23f53 not found: ID does not exist" Mar 18 13:06:42.394089 master-0 kubenswrapper[4025]: I0318 13:06:42.394088 4025 scope.go:117] "RemoveContainer" containerID="5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33" Mar 18 13:06:42.394262 master-0 kubenswrapper[4025]: I0318 13:06:42.394242 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33"} err="failed to get container status \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": rpc error: code = NotFound desc = could not find container \"5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33\": container with ID starting with 5986658441ac8e0ea1b629bb542ae3fb5107089a49a39ad390ad0c655ec0db33 not found: ID does not exist" Mar 18 13:06:42.394262 master-0 kubenswrapper[4025]: I0318 13:06:42.394260 4025 scope.go:117] "RemoveContainer" containerID="021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4" Mar 18 13:06:42.394453 master-0 kubenswrapper[4025]: I0318 13:06:42.394425 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4"} err="failed to get container status \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": rpc error: code = NotFound desc = could not find container \"021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4\": container with ID starting with 021952df9a8c624e7b2c202a63c90895d083ef7dabb6b26e0225c77b24cf2ab4 not found: ID does not exist" Mar 18 13:06:42.394507 master-0 kubenswrapper[4025]: I0318 13:06:42.394453 4025 scope.go:117] "RemoveContainer" containerID="1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9" Mar 18 13:06:42.394655 master-0 kubenswrapper[4025]: I0318 13:06:42.394623 4025 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9"} err="failed to get container status \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": rpc error: code = NotFound desc = could not find container \"1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9\": container with ID starting with 1cf077facb895a3207c5561c9c28e741d1ee77714479ab314ca1ccc9a66f6ea9 not found: ID does not exist" Mar 18 13:06:42.394655 master-0 kubenswrapper[4025]: I0318 13:06:42.394640 4025 scope.go:117] "RemoveContainer" containerID="4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c" Mar 18 13:06:42.394819 master-0 kubenswrapper[4025]: I0318 13:06:42.394800 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c"} err="failed to get container status \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": rpc error: code = NotFound desc = could not find container \"4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c\": container with ID starting with 4af20364ad415dccb32c409cb881684320bee0de4dc961b8377a1319ec53144c not found: ID does not exist" Mar 18 13:06:42.394819 master-0 kubenswrapper[4025]: I0318 13:06:42.394816 4025 scope.go:117] "RemoveContainer" containerID="ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a" Mar 18 13:06:42.394967 master-0 kubenswrapper[4025]: I0318 13:06:42.394950 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a"} err="failed to get container status \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": rpc error: code = NotFound desc = could not find container \"ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a\": container with ID starting with 
ca309c0d2495becf9414a0126f27e8c969700d4f19e54162f23d4618d6328a5a not found: ID does not exist" Mar 18 13:06:42.394967 master-0 kubenswrapper[4025]: I0318 13:06:42.394965 4025 scope.go:117] "RemoveContainer" containerID="493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0" Mar 18 13:06:42.395141 master-0 kubenswrapper[4025]: I0318 13:06:42.395122 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0"} err="failed to get container status \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": rpc error: code = NotFound desc = could not find container \"493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0\": container with ID starting with 493dfe9b51c8c759aa463a5b1071f77bf81027c1ff66108c97fa3d08852f58a0 not found: ID does not exist" Mar 18 13:06:42.395174 master-0 kubenswrapper[4025]: I0318 13:06:42.395141 4025 scope.go:117] "RemoveContainer" containerID="5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b" Mar 18 13:06:42.395351 master-0 kubenswrapper[4025]: I0318 13:06:42.395323 4025 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b"} err="failed to get container status \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": rpc error: code = NotFound desc = could not find container \"5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b\": container with ID starting with 5f34b57b95f8e159690599ed439ca1174115eb38d5aa6ea3ac9710d124e4b96b not found: ID does not exist" Mar 18 13:06:42.395351 master-0 kubenswrapper[4025]: I0318 13:06:42.395347 4025 scope.go:117] "RemoveContainer" containerID="730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab" Mar 18 13:06:42.395529 master-0 kubenswrapper[4025]: I0318 13:06:42.395507 4025 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab"} err="failed to get container status \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": rpc error: code = NotFound desc = could not find container \"730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab\": container with ID starting with 730e302be33fbd2d3a011769ec1b6593ca19ebf0b7337b509fe03fc735096bab not found: ID does not exist" Mar 18 13:06:42.803604 master-0 kubenswrapper[4025]: I0318 13:06:42.803554 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:42.803705 master-0 kubenswrapper[4025]: I0318 13:06:42.803629 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:42.803705 master-0 kubenswrapper[4025]: E0318 13:06:42.803670 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:42.803824 master-0 kubenswrapper[4025]: E0318 13:06:42.803789 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278219 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"f9cf42fa7b1174c3acfbdce01651da3818728bde2c45ab899be0bac58cf14d63"} Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278272 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"c2910bb6cfe70a0b7fe7aec2dfbcd08566c4710c84d5c9e277f29f1f256e1137"} Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278292 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"907b5d903e5957f54b3bce3ec41f050fa3c3f32c30e69541581441ecd0e3d71f"} Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278310 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"2b48ee454e0a7faaac5086b96c579627b7ea7c7f153d481a4dbe7060bc0b9ae5"} Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278328 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"ab9a907722835d84f71abb4d9eab924c219518b800540915689a3323e5847cd3"} Mar 18 13:06:43.278379 master-0 kubenswrapper[4025]: I0318 13:06:43.278344 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" 
event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"ac602c08b43d4d9d84ca16d70364a42759fac3f28c0a56e0ee205a06885a2fad"} Mar 18 13:06:43.809784 master-0 kubenswrapper[4025]: I0318 13:06:43.809330 4025 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0a7c756-575a-4000-b7c1-4f68a93870e8" path="/var/lib/kubelet/pods/c0a7c756-575a-4000-b7c1-4f68a93870e8/volumes" Mar 18 13:06:44.804342 master-0 kubenswrapper[4025]: I0318 13:06:44.804198 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:44.805076 master-0 kubenswrapper[4025]: I0318 13:06:44.804258 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:44.805076 master-0 kubenswrapper[4025]: E0318 13:06:44.804470 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:44.805076 master-0 kubenswrapper[4025]: E0318 13:06:44.804611 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:44.820744 master-0 kubenswrapper[4025]: I0318 13:06:44.820690 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:06:45.287875 master-0 kubenswrapper[4025]: I0318 13:06:45.287793 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"e209b4b73eecb88cb54a49758853303bdcdf5c32268cc6c2b82da80281f8a70f"} Mar 18 13:06:45.667393 master-0 kubenswrapper[4025]: I0318 13:06:45.667268 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:06:45.667782 master-0 kubenswrapper[4025]: E0318 13:06:45.667568 4025 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:45.667782 master-0 kubenswrapper[4025]: E0318 13:06:45.667718 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:49.667680946 +0000 UTC m=+170.587559608 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:06:46.779395 master-0 kubenswrapper[4025]: I0318 13:06:46.779276 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:46.780288 master-0 kubenswrapper[4025]: E0318 13:06:46.779569 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 13:06:46.780288 master-0 kubenswrapper[4025]: E0318 13:06:46.779638 4025 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 13:06:46.780288 master-0 kubenswrapper[4025]: E0318 13:06:46.779664 4025 projected.go:194] Error preparing data for projected volume kube-api-access-2snjj for pod openshift-network-diagnostics/network-check-target-kcsgp: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:46.780288 master-0 kubenswrapper[4025]: E0318 13:06:46.779768 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj podName:0278b04b-b27b-4717-a009-a70315fd05a6 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:18.779736588 +0000 UTC m=+139.699615250 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2snjj" (UniqueName: "kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj") pod "network-check-target-kcsgp" (UID: "0278b04b-b27b-4717-a009-a70315fd05a6") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 13:06:46.804000 master-0 kubenswrapper[4025]: I0318 13:06:46.803895 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:46.804267 master-0 kubenswrapper[4025]: I0318 13:06:46.803907 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:46.804267 master-0 kubenswrapper[4025]: E0318 13:06:46.804075 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6" Mar 18 13:06:46.804267 master-0 kubenswrapper[4025]: E0318 13:06:46.804218 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973" Mar 18 13:06:47.298672 master-0 kubenswrapper[4025]: I0318 13:06:47.298402 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"119d9e30269ff1ad7de2d957f082d56c48f1d252e3456aa6d95395e7b3eb424b"} Mar 18 13:06:47.302265 master-0 kubenswrapper[4025]: I0318 13:06:47.299728 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:47.302265 master-0 kubenswrapper[4025]: I0318 13:06:47.299861 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:47.319512 master-0 kubenswrapper[4025]: I0318 13:06:47.319425 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:06:47.453827 master-0 kubenswrapper[4025]: I0318 13:06:47.453743 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=3.453724485 podStartE2EDuration="3.453724485s" podCreationTimestamp="2026-03-18 13:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:47.453546431 +0000 UTC m=+108.373425103" watchObservedRunningTime="2026-03-18 13:06:47.453724485 +0000 UTC m=+108.373603107" Mar 18 13:06:47.478146 master-0 kubenswrapper[4025]: I0318 13:06:47.478057 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" podStartSLOduration=6.478036134 podStartE2EDuration="6.478036134s" podCreationTimestamp="2026-03-18 13:06:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:47.477912351 +0000 UTC m=+108.397791003" watchObservedRunningTime="2026-03-18 13:06:47.478036134 +0000 UTC m=+108.397914766"
Mar 18 13:06:47.816447 master-0 kubenswrapper[4025]: I0318 13:06:47.816385 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 13:06:48.301030 master-0 kubenswrapper[4025]: I0318 13:06:48.300996 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:06:48.323303 master-0 kubenswrapper[4025]: I0318 13:06:48.323257 4025 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:06:48.366125 master-0 kubenswrapper[4025]: I0318 13:06:48.365646 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.365626829 podStartE2EDuration="1.365626829s" podCreationTimestamp="2026-03-18 13:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:48.365600408 +0000 UTC m=+109.285479050" watchObservedRunningTime="2026-03-18 13:06:48.365626829 +0000 UTC m=+109.285505451"
Mar 18 13:06:48.803677 master-0 kubenswrapper[4025]: I0318 13:06:48.803583 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp"
Mar 18 13:06:48.803677 master-0 kubenswrapper[4025]: I0318 13:06:48.803620 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:48.804025 master-0 kubenswrapper[4025]: E0318 13:06:48.803766 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6"
Mar 18 13:06:48.804025 master-0 kubenswrapper[4025]: E0318 13:06:48.803941 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:50.803382 master-0 kubenswrapper[4025]: I0318 13:06:50.803288 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp"
Mar 18 13:06:50.803382 master-0 kubenswrapper[4025]: I0318 13:06:50.803339 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:06:50.804502 master-0 kubenswrapper[4025]: E0318 13:06:50.803551 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-kcsgp" podUID="0278b04b-b27b-4717-a009-a70315fd05a6"
Mar 18 13:06:50.804502 master-0 kubenswrapper[4025]: E0318 13:06:50.803702 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kbfbq" podUID="bf1cc230-0a79-4a1d-b500-a65d02e50973"
Mar 18 13:06:52.367880 master-0 kubenswrapper[4025]: I0318 13:06:52.367760 4025 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 18 13:06:52.368875 master-0 kubenswrapper[4025]: I0318 13:06:52.368834 4025 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Mar 18 13:06:52.411874 master-0 kubenswrapper[4025]: I0318 13:06:52.411820 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"]
Mar 18 13:06:52.412483 master-0 kubenswrapper[4025]: I0318 13:06:52.412382 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:06:52.415528 master-0 kubenswrapper[4025]: I0318 13:06:52.414488 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.415528 master-0 kubenswrapper[4025]: I0318 13:06:52.415441 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.416793 master-0 kubenswrapper[4025]: I0318 13:06:52.416642 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:06:52.416969 master-0 kubenswrapper[4025]: I0318 13:06:52.416936 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"]
Mar 18 13:06:52.417512 master-0 kubenswrapper[4025]: I0318 13:06:52.417366 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.417693 master-0 kubenswrapper[4025]: I0318 13:06:52.417665 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"]
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.418035 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.418700 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.418894 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"]
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.418974 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.419102 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"]
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.419168 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.419279 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.419329 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.419467 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:06:52.420497 master-0 kubenswrapper[4025]: I0318 13:06:52.420231 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"]
Mar 18 13:06:52.421105 master-0 kubenswrapper[4025]: I0318 13:06:52.420570 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.422832 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"]
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.422861 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423048 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"]
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423102 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423116 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423143 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423149 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423221 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"]
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423259 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423390 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423476 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423497 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.423533 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.424478 master-0 kubenswrapper[4025]: I0318 13:06:52.424465 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 13:06:52.425247 master-0 kubenswrapper[4025]: I0318 13:06:52.424787 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.425247 master-0 kubenswrapper[4025]: I0318 13:06:52.424953 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.425357 master-0 kubenswrapper[4025]: I0318 13:06:52.425325 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.425522 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.425767 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.425953 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.426088 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.428094 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.428370 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.428527 master-0 kubenswrapper[4025]: I0318 13:06:52.428508 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 13:06:52.433199 master-0 kubenswrapper[4025]: I0318 13:06:52.433149 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"]
Mar 18 13:06:52.433664 master-0 kubenswrapper[4025]: I0318 13:06:52.433630 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.439914 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"]
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.441135 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.441406 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.441490 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.441544 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"]
Mar 18 13:06:52.442734 master-0 kubenswrapper[4025]: I0318 13:06:52.441646 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 13:06:52.451467 master-0 kubenswrapper[4025]: I0318 13:06:52.447925 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 13:06:52.451467 master-0 kubenswrapper[4025]: I0318 13:06:52.448064 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:06:52.451467 master-0 kubenswrapper[4025]: I0318 13:06:52.448504 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 13:06:52.451467 master-0 kubenswrapper[4025]: I0318 13:06:52.448592 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 13:06:52.451467 master-0 kubenswrapper[4025]: I0318 13:06:52.448620 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"]
Mar 18 13:06:52.454472 master-0 kubenswrapper[4025]: I0318 13:06:52.452983 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.462569 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.462945 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.463716 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.463957 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-99pzm"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.464155 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.464505 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.464726 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.465044 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"]
Mar 18 13:06:52.465444 master-0 kubenswrapper[4025]: I0318 13:06:52.465255 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"]
Mar 18 13:06:52.465880 master-0 kubenswrapper[4025]: I0318 13:06:52.465543 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"]
Mar 18 13:06:52.465880 master-0 kubenswrapper[4025]: I0318 13:06:52.465566 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:06:52.465880 master-0 kubenswrapper[4025]: I0318 13:06:52.465802 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.465999 master-0 kubenswrapper[4025]: I0318 13:06:52.465907 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.465999 master-0 kubenswrapper[4025]: I0318 13:06:52.465943 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:06:52.465999 master-0 kubenswrapper[4025]: I0318 13:06:52.465995 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:06:52.466125 master-0 kubenswrapper[4025]: I0318 13:06:52.466052 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.466125 master-0 kubenswrapper[4025]: I0318 13:06:52.466098 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 18 13:06:52.466444 master-0 kubenswrapper[4025]: I0318 13:06:52.466306 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:06:52.466500 master-0 kubenswrapper[4025]: I0318 13:06:52.466462 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467340 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467398 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467447 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467471 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467492 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467512 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467532 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467570 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467592 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467611 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467634 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467656 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467679 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.470435 master-0 kubenswrapper[4025]: I0318 13:06:52.467697 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467717 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467739 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467771 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467790 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467821 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467843 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467863 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467882 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467905 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467927 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467948 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467967 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.467991 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.468026 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.470997 master-0 kubenswrapper[4025]: I0318 13:06:52.468044 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468067 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468086 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468109 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468130 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468153 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468175 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468196 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468216 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468237 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.468315 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"]
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.470269 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:06:52.471523 master-0 kubenswrapper[4025]: I0318 13:06:52.470575 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 13:06:52.478470 master-0 kubenswrapper[4025]: I0318 13:06:52.475968 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"]
Mar 18 13:06:52.478470 master-0 kubenswrapper[4025]: I0318 13:06:52.478244 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:06:52.478470 master-0 kubenswrapper[4025]: I0318 13:06:52.476996 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:06:52.478724 master-0 kubenswrapper[4025]: I0318 13:06:52.477604 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.478724 master-0 kubenswrapper[4025]: I0318 13:06:52.477702 4025 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.478724 master-0 kubenswrapper[4025]: I0318 13:06:52.477070 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:06:52.478724 master-0 kubenswrapper[4025]: I0318 13:06:52.477179 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.478724 master-0 kubenswrapper[4025]: I0318 13:06:52.477238 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.478895 master-0 kubenswrapper[4025]: I0318 13:06:52.477276 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:06:52.478895 master-0 kubenswrapper[4025]: I0318 13:06:52.477405 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.479527 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.479632 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480123 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480220 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480308 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480321 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480167 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480391 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480331 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480543 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"] Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.480366 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481014 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481100 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481130 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481280 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481303 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481327 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481500 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481536 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481753 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:06:52.482004 
master-0 kubenswrapper[4025]: I0318 13:06:52.481768 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481826 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.482004 master-0 kubenswrapper[4025]: I0318 13:06:52.481919 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.482628 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.482634 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.482644 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.482671 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.482795 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.483075 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.483367 4025 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:06:52.484212 master-0 kubenswrapper[4025]: I0318 13:06:52.483741 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.500252 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"] Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.500509 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.501435 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.501649 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"] Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.501673 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.501822 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.501984 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502052 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502117 4025 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502119 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502164 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502180 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502194 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502224 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:06:52.502530 master-0 kubenswrapper[4025]: I0318 13:06:52.502264 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.502804 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.502951 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.503051 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"] Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.503553 
4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.503649 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.503845 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.503952 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.504064 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 13:06:52.505274 master-0 kubenswrapper[4025]: I0318 13:06:52.504222 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.511148 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.512965 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.513657 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.513706 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: 
I0318 13:06:52.513719 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.515854 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-99pzm"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.517171 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"] Mar 18 13:06:52.518199 master-0 kubenswrapper[4025]: I0318 13:06:52.517681 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"] Mar 18 13:06:52.518509 master-0 kubenswrapper[4025]: I0318 13:06:52.518372 4025 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-jkl4x"] Mar 18 13:06:52.518904 master-0 kubenswrapper[4025]: I0318 13:06:52.518867 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:52.519321 master-0 kubenswrapper[4025]: I0318 13:06:52.519297 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"] Mar 18 13:06:52.521236 master-0 kubenswrapper[4025]: I0318 13:06:52.520279 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:06:52.521236 master-0 kubenswrapper[4025]: I0318 13:06:52.520550 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"] Mar 18 13:06:52.521527 master-0 kubenswrapper[4025]: I0318 13:06:52.521505 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"] Mar 18 13:06:52.521871 master-0 kubenswrapper[4025]: I0318 13:06:52.521829 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"] Mar 18 13:06:52.523457 master-0 kubenswrapper[4025]: I0318 13:06:52.523406 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"] Mar 18 13:06:52.524512 master-0 kubenswrapper[4025]: I0318 13:06:52.524482 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"] Mar 18 13:06:52.525990 master-0 kubenswrapper[4025]: I0318 13:06:52.525965 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"] Mar 18 13:06:52.527354 master-0 kubenswrapper[4025]: I0318 13:06:52.527312 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"] Mar 18 13:06:52.528432 
master-0 kubenswrapper[4025]: I0318 13:06:52.528389 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"] Mar 18 13:06:52.535389 master-0 kubenswrapper[4025]: I0318 13:06:52.531976 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"] Mar 18 13:06:52.535389 master-0 kubenswrapper[4025]: I0318 13:06:52.532035 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"] Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.570839 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.570886 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.570919 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.570946 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.570970 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571001 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571025 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571048 4025 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571082 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571104 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571140 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571169 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571192 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:06:52.573625 master-0 kubenswrapper[4025]: I0318 13:06:52.571214 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571239 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571263 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: 
\"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571285 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571308 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571348 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571369 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 
13:06:52.571392 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571441 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571468 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571493 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571519 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.571540 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: I0318 13:06:52.572323 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: E0318 13:06:52.572453 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: E0318 13:06:52.572527 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.072509048 +0000 UTC m=+113.992387800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:06:52.573993 master-0 kubenswrapper[4025]: E0318 13:06:52.572574 4025 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: E0318 13:06:52.572702 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.072685162 +0000 UTC m=+113.992563874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.573327 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.573451 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.577655 master-0 
kubenswrapper[4025]: I0318 13:06:52.571560 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575046 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575066 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575093 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575109 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: E0318 13:06:52.575445 4025 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: E0318 13:06:52.575500 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.075482949 +0000 UTC m=+113.995361571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575776 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575811 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575837 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: I0318 13:06:52.575894 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.577655 master-0 kubenswrapper[4025]: E0318 13:06:52.575981 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: E0318 13:06:52.576018 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.076003731 +0000 UTC m=+113.995882353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576053 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576379 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576526 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576561 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576581 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576600 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576620 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576659 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576682 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576680 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576701 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.576721 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.577068 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: 
\"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.578030 master-0 kubenswrapper[4025]: I0318 13:06:52.577182 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577260 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577299 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577334 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: E0318 13:06:52.577342 4025 secret.go:189] 
Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577365 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: E0318 13:06:52.577381 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.077369484 +0000 UTC m=+113.997248106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577402 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577529 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vtf\" (UniqueName: 
\"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577673 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577701 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577727 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577756 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: E0318 13:06:52.577763 4025 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: I0318 13:06:52.577782 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.578388 master-0 kubenswrapper[4025]: E0318 13:06:52.577810 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.077793474 +0000 UTC m=+113.997672096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.577811 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.577919 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.577942 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.577964 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: 
\"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.577982 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578002 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578499 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578557 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578591 4025 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578596 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578619 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578650 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578829 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.578848 master-0 kubenswrapper[4025]: I0318 13:06:52.578858 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.579248 master-0 kubenswrapper[4025]: I0318 13:06:52.578886 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.579248 master-0 kubenswrapper[4025]: I0318 13:06:52.578920 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.579248 master-0 kubenswrapper[4025]: I0318 13:06:52.578948 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.579248 master-0 kubenswrapper[4025]: I0318 13:06:52.578974 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.579381 master-0 kubenswrapper[4025]: I0318 13:06:52.579295 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.579812 master-0 kubenswrapper[4025]: I0318 13:06:52.579676 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:06:52.579812 master-0 kubenswrapper[4025]: I0318 13:06:52.579737 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.579812 master-0 kubenswrapper[4025]: I0318 13:06:52.579758 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.579812 master-0 kubenswrapper[4025]: I0318 13:06:52.579776 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:06:52.579937 master-0 kubenswrapper[4025]: I0318 13:06:52.579873 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.580534 master-0 kubenswrapper[4025]: I0318 13:06:52.580499 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.581945 master-0 kubenswrapper[4025]: I0318 13:06:52.581902 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.593440 master-0 kubenswrapper[4025]: I0318 13:06:52.586619 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:06:52.598774 master-0 kubenswrapper[4025]: I0318 13:06:52.598723 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:06:52.598973 master-0 kubenswrapper[4025]: I0318 13:06:52.598936 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:06:52.599243 master-0 kubenswrapper[4025]: I0318 13:06:52.599212 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:06:52.599722 master-0 kubenswrapper[4025]: I0318 13:06:52.599700 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:06:52.601451 master-0 kubenswrapper[4025]: I0318 13:06:52.600869 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:06:52.601451 master-0 kubenswrapper[4025]: I0318 13:06:52.601055 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.601637 master-0 kubenswrapper[4025]: I0318 13:06:52.601505 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:06:52.614066 master-0 kubenswrapper[4025]: I0318 13:06:52.614021 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:06:52.626454 master-0 kubenswrapper[4025]: I0318 13:06:52.626399 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:06:52.632292 master-0 kubenswrapper[4025]: I0318 13:06:52.632231 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:06:52.652927 master-0 kubenswrapper[4025]: I0318 13:06:52.652702 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.670262 master-0 kubenswrapper[4025]: I0318 13:06:52.670220 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:06:52.679775 master-0 kubenswrapper[4025]: I0318 13:06:52.679722 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.679775 master-0 kubenswrapper[4025]: I0318 13:06:52.679767 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:06:52.679947 master-0 kubenswrapper[4025]: I0318 13:06:52.679807 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.680092 master-0 kubenswrapper[4025]: I0318 13:06:52.680062 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:06:52.680243 master-0 kubenswrapper[4025]: I0318 13:06:52.680215 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:06:52.680330 master-0 kubenswrapper[4025]: E0318 13:06:52.680284 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:06:52.680473 master-0 kubenswrapper[4025]: E0318 13:06:52.680458 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.180435839 +0000 UTC m=+114.100314461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found
Mar 18 13:06:52.680953 master-0 kubenswrapper[4025]: I0318 13:06:52.680920 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:06:52.681058 master-0 kubenswrapper[4025]: I0318 13:06:52.681042 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.681551 master-0 kubenswrapper[4025]: I0318 13:06:52.681160 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:06:52.681675 master-0 kubenswrapper[4025]: I0318 13:06:52.681654 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:06:52.681784 master-0 kubenswrapper[4025]: I0318 13:06:52.681767 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:06:52.681872 master-0 kubenswrapper[4025]: I0318 13:06:52.681839 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:06:52.681958 master-0 kubenswrapper[4025]: I0318 13:06:52.681609 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:06:52.682349 master-0 kubenswrapper[4025]: I0318 13:06:52.682329 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:06:52.682507 master-0 kubenswrapper[4025]: I0318 13:06:52.682489 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:06:52.682597 master-0 kubenswrapper[4025]: I0318 13:06:52.682583 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.682684 master-0 kubenswrapper[4025]: I0318 13:06:52.682669 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:06:52.682813 master-0 kubenswrapper[4025]: I0318 13:06:52.682759 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:06:52.682908 master-0 kubenswrapper[4025]: I0318 13:06:52.682890 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:06:52.683106 master-0 kubenswrapper[4025]: I0318 13:06:52.683088 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.683251 master-0 kubenswrapper[4025]: I0318 13:06:52.683233 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:06:52.683353 master-0 kubenswrapper[4025]: I0318 13:06:52.683337 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:06:52.683484 master-0 kubenswrapper[4025]: I0318 13:06:52.683470 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:06:52.683619 master-0 kubenswrapper[4025]: E0318 13:06:52.683596 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:06:52.683687 master-0 kubenswrapper[4025]: E0318 13:06:52.683649 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.183631145 +0000 UTC m=+114.103509767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:06:52.683794 master-0 kubenswrapper[4025]: I0318 13:06:52.683776 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:06:52.683892 master-0 kubenswrapper[4025]: I0318 13:06:52.683876 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:06:52.684003 master-0 kubenswrapper[4025]: I0318 13:06:52.683986 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:06:52.684165 master-0 kubenswrapper[4025]: I0318 13:06:52.684147 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.684282 master-0 kubenswrapper[4025]: I0318 13:06:52.684255 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.684367 master-0 kubenswrapper[4025]: I0318 13:06:52.684351 4025 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:06:52.684485 master-0 kubenswrapper[4025]: I0318 13:06:52.684467 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.684595 master-0 kubenswrapper[4025]: I0318 13:06:52.684578 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.684656 master-0 kubenswrapper[4025]: I0318 13:06:52.684635 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.684739 master-0 kubenswrapper[4025]: I0318 13:06:52.684726 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.684818 master-0 kubenswrapper[4025]: I0318 13:06:52.684794 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.684896 master-0 kubenswrapper[4025]: E0318 13:06:52.684866 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:06:52.684946 master-0 kubenswrapper[4025]: E0318 13:06:52.684886 4025 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:06:52.684995 master-0 kubenswrapper[4025]: I0318 13:06:52.684973 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.685065 master-0 kubenswrapper[4025]: I0318 13:06:52.685050 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: E0318 13:06:52.685222 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.185065759 +0000 UTC m=+114.104944381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685272 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685320 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685352 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685376 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685402 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685445 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685488 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685511 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685536 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685560 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:06:52.685614 master-0 kubenswrapper[4025]: I0318 13:06:52.685600 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.685641 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.685668 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.685772 4025 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.685804 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.185794286 +0000 UTC m=+114.105672908 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.685790 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.685873 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.685900 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.185890728 +0000 UTC m=+114.105769350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.685899 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.686294 4025 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.686372 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.18634182 +0000 UTC m=+114.106220532 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.686383 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: E0318 13:06:52.686398 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.186388191 +0000 UTC m=+114.106266943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.686704 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.686749 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:52.687344 master-0 kubenswrapper[4025]: I0318 13:06:52.686794 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:52.687772 master-0 kubenswrapper[4025]: I0318 13:06:52.686966 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.687772 master-0 kubenswrapper[4025]: I0318 13:06:52.687097 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.687772 master-0 kubenswrapper[4025]: I0318 13:06:52.687300 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.689371 master-0 kubenswrapper[4025]: I0318 13:06:52.689338 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.690002 master-0 kubenswrapper[4025]: I0318 13:06:52.689963 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.690853 master-0 kubenswrapper[4025]: I0318 13:06:52.690820 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.709837 master-0 kubenswrapper[4025]: I0318 13:06:52.709796 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:52.741343 master-0 kubenswrapper[4025]: I0318 13:06:52.741300 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:06:52.767749 master-0 kubenswrapper[4025]: I0318 13:06:52.767703 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.768100 master-0 kubenswrapper[4025]: I0318 13:06:52.768059 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:06:52.781083 master-0 kubenswrapper[4025]: I0318 13:06:52.781030 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"] Mar 18 13:06:52.793922 master-0 kubenswrapper[4025]: W0318 13:06:52.793842 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a3a84b_048f_4822_9f05_0e7509327ca2.slice/crio-e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f WatchSource:0}: Error finding container e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f: Status 404 returned error can't find the container with id e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f Mar 18 13:06:52.794670 master-0 kubenswrapper[4025]: I0318 13:06:52.794591 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:52.795034 master-0 kubenswrapper[4025]: I0318 13:06:52.795006 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.795099 master-0 kubenswrapper[4025]: I0318 13:06:52.795045 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:52.795099 master-0 kubenswrapper[4025]: I0318 13:06:52.795064 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:52.795099 master-0 kubenswrapper[4025]: I0318 13:06:52.795084 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:52.795203 master-0 kubenswrapper[4025]: E0318 13:06:52.795162 4025 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:06:52.795203 master-0 kubenswrapper[4025]: E0318 13:06:52.795194 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:53.295181322 +0000 UTC m=+114.215059944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:06:52.796150 master-0 kubenswrapper[4025]: I0318 13:06:52.795592 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:52.796150 master-0 kubenswrapper[4025]: I0318 13:06:52.795656 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:52.796150 master-0 kubenswrapper[4025]: I0318 13:06:52.795682 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.796150 master-0 kubenswrapper[4025]: I0318 13:06:52.795716 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: 
\"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.796150 master-0 kubenswrapper[4025]: I0318 13:06:52.795919 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:52.802038 master-0 kubenswrapper[4025]: I0318 13:06:52.801158 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.802038 master-0 kubenswrapper[4025]: I0318 13:06:52.801235 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:52.805199 master-0 kubenswrapper[4025]: I0318 13:06:52.802653 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:52.805199 master-0 
kubenswrapper[4025]: I0318 13:06:52.802913 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:52.805199 master-0 kubenswrapper[4025]: I0318 13:06:52.803505 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:06:52.805199 master-0 kubenswrapper[4025]: I0318 13:06:52.803526 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:06:52.805199 master-0 kubenswrapper[4025]: I0318 13:06:52.805145 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:52.808872 master-0 kubenswrapper[4025]: I0318 13:06:52.808816 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.816314 master-0 kubenswrapper[4025]: I0318 13:06:52.816272 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" Mar 18 13:06:52.821742 master-0 kubenswrapper[4025]: I0318 13:06:52.821622 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:06:52.831275 master-0 kubenswrapper[4025]: I0318 13:06:52.831211 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:52.848969 master-0 kubenswrapper[4025]: I0318 13:06:52.848846 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:06:52.852675 master-0 kubenswrapper[4025]: I0318 13:06:52.852616 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:52.855788 master-0 kubenswrapper[4025]: I0318 13:06:52.855735 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:06:52.870027 master-0 kubenswrapper[4025]: I0318 13:06:52.869987 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:52.889615 master-0 kubenswrapper[4025]: I0318 13:06:52.889523 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:52.909883 master-0 kubenswrapper[4025]: I0318 13:06:52.909836 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:52.946642 master-0 kubenswrapper[4025]: I0318 13:06:52.946307 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"] Mar 18 13:06:52.949454 master-0 kubenswrapper[4025]: I0318 13:06:52.948983 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod 
\"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:52.958653 master-0 kubenswrapper[4025]: I0318 13:06:52.958626 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:52.965128 master-0 kubenswrapper[4025]: I0318 13:06:52.964005 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:06:52.971560 master-0 kubenswrapper[4025]: I0318 13:06:52.969240 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:06:52.992130 master-0 kubenswrapper[4025]: I0318 13:06:52.991784 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:53.004589 master-0 kubenswrapper[4025]: I0318 13:06:53.002495 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:06:53.014121 master-0 kubenswrapper[4025]: I0318 13:06:53.010963 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:53.014748 master-0 kubenswrapper[4025]: I0318 13:06:53.014675 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"] Mar 18 13:06:53.028576 master-0 kubenswrapper[4025]: I0318 13:06:53.028537 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:53.043862 master-0 kubenswrapper[4025]: I0318 13:06:53.043655 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:06:53.046985 master-0 kubenswrapper[4025]: I0318 13:06:53.046737 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:06:53.057706 master-0 kubenswrapper[4025]: I0318 13:06:53.057628 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:06:53.059320 master-0 kubenswrapper[4025]: I0318 13:06:53.057952 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:06:53.068066 master-0 kubenswrapper[4025]: I0318 13:06:53.068016 4025 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:53.074143 master-0 kubenswrapper[4025]: I0318 13:06:53.074038 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:06:53.077654 master-0 kubenswrapper[4025]: I0318 13:06:53.077079 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"] Mar 18 13:06:53.077830 master-0 kubenswrapper[4025]: I0318 13:06:53.077794 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"] Mar 18 13:06:53.081684 master-0 kubenswrapper[4025]: I0318 13:06:53.081649 4025 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:06:53.095779 master-0 kubenswrapper[4025]: W0318 13:06:53.095732 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07505113_d5e7_4ea3_b9cc_8f08cba45ccc.slice/crio-6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2 WatchSource:0}: Error finding container 6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2: Status 404 returned error can't find the container with id 6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2 Mar 18 13:06:53.095964 master-0 kubenswrapper[4025]: I0318 13:06:53.095944 4025 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:06:53.099345 master-0 kubenswrapper[4025]: I0318 13:06:53.099315 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:06:53.099508 master-0 kubenswrapper[4025]: I0318 13:06:53.099486 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:06:53.099572 master-0 kubenswrapper[4025]: I0318 13:06:53.099521 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:53.099628 master-0 kubenswrapper[4025]: E0318 13:06:53.099598 4025 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:53.099665 master-0 kubenswrapper[4025]: E0318 13:06:53.099654 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.099638135 +0000 UTC m=+115.019516757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:06:53.099707 master-0 kubenswrapper[4025]: E0318 13:06:53.099684 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:06:53.099707 master-0 kubenswrapper[4025]: I0318 13:06:53.099691 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:53.099771 master-0 kubenswrapper[4025]: E0318 13:06:53.099730 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:06:54.099714767 +0000 UTC m=+115.019593479 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:06:53.099816 master-0 kubenswrapper[4025]: E0318 13:06:53.099792 4025 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:06:53.099856 master-0 kubenswrapper[4025]: E0318 13:06:53.099830 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.099816859 +0000 UTC m=+115.019695481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:06:53.099856 master-0 kubenswrapper[4025]: I0318 13:06:53.099817 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.099864 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret 
"cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: I0318 13:06:53.099884 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.099909 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.099929 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.099922712 +0000 UTC m=+115.019801454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.099964 4025 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.099983 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.099976853 +0000 UTC m=+115.019855465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:06:53.100727 master-0 kubenswrapper[4025]: E0318 13:06:53.100061 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.100053765 +0000 UTC m=+115.019932387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:53.107580 master-0 kubenswrapper[4025]: W0318 13:06:53.107530 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15a97fe2_5022_4997_9936_4247ae7ecb43.slice/crio-f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45 WatchSource:0}: Error finding container f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45: Status 404 returned error can't find the container with id f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45 Mar 18 13:06:53.162119 master-0 kubenswrapper[4025]: I0318 13:06:53.161073 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"] Mar 18 13:06:53.196283 master-0 kubenswrapper[4025]: I0318 13:06:53.195445 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"] Mar 18 13:06:53.200685 master-0 kubenswrapper[4025]: I0318 13:06:53.200647 4025 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:53.200792 master-0 kubenswrapper[4025]: I0318 13:06:53.200691 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:53.200792 master-0 kubenswrapper[4025]: I0318 13:06:53.200743 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:53.200914 master-0 kubenswrapper[4025]: E0318 13:06:53.200855 4025 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:06:53.200914 master-0 kubenswrapper[4025]: E0318 13:06:53.200879 4025 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:53.200914 master-0 kubenswrapper[4025]: E0318 13:06:53.200913 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. 
No retries permitted until 2026-03-18 13:06:54.200900107 +0000 UTC m=+115.120778729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:06:53.201003 master-0 kubenswrapper[4025]: E0318 13:06:53.200979 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.200932738 +0000 UTC m=+115.120811450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:53.201065 master-0 kubenswrapper[4025]: W0318 13:06:53.201036 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod595f697b_d238_4500_84ce_1ea00377f05e.slice/crio-5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba WatchSource:0}: Error finding container 5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba: Status 404 returned error can't find the container with id 5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba Mar 18 13:06:53.201117 master-0 kubenswrapper[4025]: I0318 13:06:53.201098 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: 
\"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:53.201257 master-0 kubenswrapper[4025]: I0318 13:06:53.201150 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:53.201257 master-0 kubenswrapper[4025]: E0318 13:06:53.201211 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:06:53.201322 master-0 kubenswrapper[4025]: E0318 13:06:53.201265 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:06:53.201322 master-0 kubenswrapper[4025]: E0318 13:06:53.201290 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.201281626 +0000 UTC m=+115.121160248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: E0318 13:06:53.201332 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:06:54.201298917 +0000 UTC m=+115.121177539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: I0318 13:06:53.201353 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: E0318 13:06:53.201375 4025 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: E0318 13:06:53.201409 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.201399469 +0000 UTC m=+115.121278091 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: E0318 13:06:53.201459 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: E0318 13:06:53.201485 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.201478042 +0000 UTC m=+115.121356664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:06:53.201513 master-0 kubenswrapper[4025]: I0318 13:06:53.201485 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:53.201685 master-0 kubenswrapper[4025]: E0318 13:06:53.201596 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" 
not found Mar 18 13:06:53.201685 master-0 kubenswrapper[4025]: E0318 13:06:53.201618 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.201612025 +0000 UTC m=+115.121490647 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:06:53.273773 master-0 kubenswrapper[4025]: I0318 13:06:53.273721 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"] Mar 18 13:06:53.285133 master-0 kubenswrapper[4025]: W0318 13:06:53.284980 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eff549_02f3_446e_b3a1_a66cecdc02a6.slice/crio-087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2 WatchSource:0}: Error finding container 087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2: Status 404 returned error can't find the container with id 087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2 Mar 18 13:06:53.294915 master-0 kubenswrapper[4025]: I0318 13:06:53.294874 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"] Mar 18 13:06:53.302176 master-0 kubenswrapper[4025]: I0318 13:06:53.302142 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:53.302275 master-0 kubenswrapper[4025]: E0318 13:06:53.302257 4025 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:06:53.302307 master-0 kubenswrapper[4025]: E0318 13:06:53.302298 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:54.302286502 +0000 UTC m=+115.222165124 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:06:53.302833 master-0 kubenswrapper[4025]: W0318 13:06:53.302800 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod902909ca_ab08_49aa_9736_70e073f8e67d.slice/crio-8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae WatchSource:0}: Error finding container 8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae: Status 404 returned error can't find the container with id 8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae Mar 18 13:06:53.314318 master-0 kubenswrapper[4025]: I0318 13:06:53.314270 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" 
event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerStarted","Data":"6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2"} Mar 18 13:06:53.315153 master-0 kubenswrapper[4025]: I0318 13:06:53.315126 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerStarted","Data":"f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45"} Mar 18 13:06:53.315862 master-0 kubenswrapper[4025]: I0318 13:06:53.315839 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jkl4x" event={"ID":"053cc9bc-f98e-46f6-93bb-b5344d20bf74","Type":"ContainerStarted","Data":"c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd"} Mar 18 13:06:53.318794 master-0 kubenswrapper[4025]: I0318 13:06:53.318746 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae"} Mar 18 13:06:53.318916 master-0 kubenswrapper[4025]: I0318 13:06:53.318890 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"] Mar 18 13:06:53.320095 master-0 kubenswrapper[4025]: I0318 13:06:53.320057 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerStarted","Data":"f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592"} Mar 18 13:06:53.320270 master-0 kubenswrapper[4025]: I0318 13:06:53.320242 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" 
event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerStarted","Data":"e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f"} Mar 18 13:06:53.320994 master-0 kubenswrapper[4025]: I0318 13:06:53.320971 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f"} Mar 18 13:06:53.322661 master-0 kubenswrapper[4025]: I0318 13:06:53.321625 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a"} Mar 18 13:06:53.322661 master-0 kubenswrapper[4025]: I0318 13:06:53.321874 4025 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:06:53.323046 master-0 kubenswrapper[4025]: I0318 13:06:53.323026 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerStarted","Data":"3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86"} Mar 18 13:06:53.326783 master-0 kubenswrapper[4025]: I0318 13:06:53.326757 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2"} Mar 18 13:06:53.327392 master-0 kubenswrapper[4025]: I0318 13:06:53.327369 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba"} Mar 18 13:06:53.330855 master-0 kubenswrapper[4025]: W0318 13:06:53.330817 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf9d21f9_64d6_4e21_a985_491197038568.slice/crio-d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475 WatchSource:0}: Error finding container d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475: Status 404 returned error can't find the container with id d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475 Mar 18 13:06:53.420108 master-0 kubenswrapper[4025]: I0318 13:06:53.419790 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"] Mar 18 13:06:53.440216 master-0 kubenswrapper[4025]: I0318 13:06:53.440169 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"] Mar 18 13:06:53.519552 master-0 kubenswrapper[4025]: I0318 13:06:53.517499 4025 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"] Mar 18 13:06:53.526022 master-0 kubenswrapper[4025]: W0318 13:06:53.525975 4025 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3728ab_5d50_40ac_95b3_74a5b62a557f.slice/crio-e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09 WatchSource:0}: Error finding container e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09: Status 404 returned error can't find the container with id e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09 Mar 18 13:06:53.528099 
master-0 kubenswrapper[4025]: E0318 13:06:53.528056 4025 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:openshift-api,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483,Command:[write-available-featuresets --asset-output-dir=/available-featuregates --payload-version=$(OPERATOR_IMAGE_VERSION)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29qbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-95bf4f4d-qwgrm_openshift-config-operator(ce3728ab-5d50-40ac-95b3-74a5b62a557f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 18 13:06:53.529520 master-0 kubenswrapper[4025]: E0318 13:06:53.529428 4025 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" Mar 18 13:06:53.866817 master-0 kubenswrapper[4025]: I0318 13:06:53.866458 4025 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" podStartSLOduration=79.866400552 podStartE2EDuration="1m19.866400552s" podCreationTimestamp="2026-03-18 13:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:06:53.865518971 +0000 UTC m=+114.785397603" watchObservedRunningTime="2026-03-18 13:06:53.866400552 +0000 UTC m=+114.786279184" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.124807 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.124853 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.124881 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod 
\"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: E0318 13:06:54.125053 4025 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: E0318 13:06:54.125110 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: E0318 13:06:54.125142 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125119605 +0000 UTC m=+117.044998287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: E0318 13:06:54.125072 4025 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: E0318 13:06:54.125163 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125155235 +0000 UTC m=+117.045033857 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.125237 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.125282 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:54.125484 master-0 kubenswrapper[4025]: I0318 13:06:54.125407 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125561 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125596 4025 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125586025 +0000 UTC m=+117.045464657 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125615 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125606366 +0000 UTC m=+117.045485098 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125659 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125683 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125675648 +0000 UTC m=+117.045554270 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125725 4025 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:54.125880 master-0 kubenswrapper[4025]: E0318 13:06:54.125747 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.125740459 +0000 UTC m=+117.045619201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:06:54.226039 master-0 kubenswrapper[4025]: I0318 13:06:54.225985 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:54.226039 master-0 kubenswrapper[4025]: I0318 13:06:54.226049 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:54.226268 master-0 kubenswrapper[4025]: I0318 13:06:54.226077 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:54.226268 master-0 kubenswrapper[4025]: I0318 13:06:54.226171 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:54.226268 master-0 kubenswrapper[4025]: I0318 13:06:54.226218 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:54.226268 master-0 kubenswrapper[4025]: I0318 13:06:54.226244 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:54.226268 
master-0 kubenswrapper[4025]: I0318 13:06:54.226271 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:54.226489 master-0 kubenswrapper[4025]: E0318 13:06:54.226379 4025 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:06:54.226531 master-0 kubenswrapper[4025]: E0318 13:06:54.226516 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.226487039 +0000 UTC m=+117.146365681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:06:54.226871 master-0 kubenswrapper[4025]: E0318 13:06:54.226849 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:06:54.226949 master-0 kubenswrapper[4025]: E0318 13:06:54.226882 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.226873979 +0000 UTC m=+117.146752601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:06:54.226949 master-0 kubenswrapper[4025]: E0318 13:06:54.226916 4025 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:06:54.226949 master-0 kubenswrapper[4025]: E0318 13:06:54.226934 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.22692881 +0000 UTC m=+117.146807432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.226963 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.226981 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.226976312 +0000 UTC m=+117.146854934 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.227009 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.227028 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.227021123 +0000 UTC m=+117.146899745 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.227057 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:06:54.227074 master-0 kubenswrapper[4025]: E0318 13:06:54.227076 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.227069234 +0000 UTC m=+117.146947856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:06:54.227294 master-0 kubenswrapper[4025]: E0318 13:06:54.227116 4025 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:54.227294 master-0 kubenswrapper[4025]: E0318 13:06:54.227136 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.227130765 +0000 UTC m=+117.147009387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:54.327397 master-0 kubenswrapper[4025]: I0318 13:06:54.327342 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:54.327601 master-0 kubenswrapper[4025]: E0318 13:06:54.327558 4025 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:06:54.327664 master-0 kubenswrapper[4025]: E0318 
13:06:54.327645 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:06:56.327626159 +0000 UTC m=+117.247504781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:06:54.333024 master-0 kubenswrapper[4025]: I0318 13:06:54.332882 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc"} Mar 18 13:06:54.334057 master-0 kubenswrapper[4025]: I0318 13:06:54.334033 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerStarted","Data":"d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475"} Mar 18 13:06:54.335709 master-0 kubenswrapper[4025]: I0318 13:06:54.335686 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09"} Mar 18 13:06:54.337427 master-0 kubenswrapper[4025]: E0318 13:06:54.337383 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" Mar 18 13:06:54.343513 master-0 kubenswrapper[4025]: I0318 13:06:54.343469 4025 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerStarted","Data":"aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98"} Mar 18 13:06:55.352084 master-0 kubenswrapper[4025]: E0318 13:06:55.348823 4025 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-api\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" Mar 18 13:06:56.151981 master-0 kubenswrapper[4025]: I0318 13:06:56.151644 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:06:56.152166 master-0 kubenswrapper[4025]: E0318 13:06:56.152039 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: I0318 13:06:56.152179 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152257 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.152228276 +0000 UTC m=+121.072106898 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: I0318 13:06:56.152292 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: I0318 13:06:56.152319 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: I0318 13:06:56.152356 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152374 4025 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152465 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.152443431 +0000 UTC m=+121.072322143 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: I0318 13:06:56.152376 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152489 4025 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152538 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:00.152528853 +0000 UTC m=+121.072407565 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152604 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152626 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152642 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.152632485 +0000 UTC m=+121.072511197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:06:56.152667 master-0 kubenswrapper[4025]: E0318 13:06:56.152663 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.152653186 +0000 UTC m=+121.072531948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:06:56.153098 master-0 kubenswrapper[4025]: E0318 13:06:56.152697 4025 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:06:56.153098 master-0 kubenswrapper[4025]: E0318 13:06:56.152750 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.152732777 +0000 UTC m=+121.072611409 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:06:56.253725 master-0 kubenswrapper[4025]: I0318 13:06:56.253664 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:56.253725 master-0 kubenswrapper[4025]: I0318 13:06:56.253715 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:06:56.253725 master-0 kubenswrapper[4025]: I0318 13:06:56.253738 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:06:56.254048 master-0 kubenswrapper[4025]: I0318 13:06:56.253768 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:06:56.254048 master-0 kubenswrapper[4025]: I0318 13:06:56.253827 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:06:56.254048 master-0 kubenswrapper[4025]: I0318 13:06:56.253849 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:06:56.254048 master-0 kubenswrapper[4025]: I0318 13:06:56.253913 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:06:56.254048 master-0 kubenswrapper[4025]: E0318 13:06:56.254043 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:06:56.254308 master-0 kubenswrapper[4025]: E0318 13:06:56.254091 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254074841 +0000 UTC m=+121.173953463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254368 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254436 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:00.25442473 +0000 UTC m=+121.174303352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254572 4025 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254643 4025 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254653 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254631175 +0000 UTC m=+121.174509887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:06:56.254686 master-0 kubenswrapper[4025]: E0318 13:06:56.254681 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254669035 +0000 UTC m=+121.174547657 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254704 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254739 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254728017 +0000 UTC m=+121.174606639 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254776 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254794 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254788538 +0000 UTC m=+121.174667160 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254823 4025 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:56.254942 master-0 kubenswrapper[4025]: E0318 13:06:56.254838 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.254833519 +0000 UTC m=+121.174712141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:06:56.355037 master-0 kubenswrapper[4025]: I0318 13:06:56.354757 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:06:56.355037 master-0 kubenswrapper[4025]: E0318 13:06:56.354944 4025 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:06:56.355037 master-0 kubenswrapper[4025]: E0318 
13:06:56.355008 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:00.354990626 +0000 UTC m=+121.274869258 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:07:00.201470 master-0 kubenswrapper[4025]: I0318 13:07:00.201351 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.201557 4025 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: I0318 13:07:00.201627 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.201636 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:08.201618222 +0000 UTC m=+129.121496844 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: I0318 13:07:00.201722 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: I0318 13:07:00.201746 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: I0318 13:07:00.201776 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: I0318 13:07:00.201825 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.201966 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202007 4025 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202022 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.202005931 +0000 UTC m=+129.121884573 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202041 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.202032151 +0000 UTC m=+129.121910793 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202055 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202074 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.202067402 +0000 UTC m=+129.121946024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202119 4025 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202136 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.202131075 +0000 UTC m=+129.122009697 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:00.205782 master-0 kubenswrapper[4025]: E0318 13:07:00.202162 4025 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:00.209221 master-0 kubenswrapper[4025]: E0318 13:07:00.202179 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.202173936 +0000 UTC m=+129.122052558 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:07:00.302873 master-0 kubenswrapper[4025]: I0318 13:07:00.302821 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:00.303078 master-0 kubenswrapper[4025]: I0318 13:07:00.302880 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:07:00.303078 master-0 kubenswrapper[4025]: E0318 13:07:00.303018 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:00.303165 master-0 kubenswrapper[4025]: E0318 13:07:00.303097 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.303076409 +0000 UTC m=+129.222955091 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:07:00.303482 master-0 kubenswrapper[4025]: E0318 13:07:00.303467 4025 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:00.303541 master-0 kubenswrapper[4025]: E0318 13:07:00.303501 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.303491069 +0000 UTC m=+129.223369771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:00.303588 master-0 kubenswrapper[4025]: I0318 13:07:00.303535 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:07:00.303698 master-0 kubenswrapper[4025]: I0318 13:07:00.303585 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:00.303698 master-0 kubenswrapper[4025]: I0318 13:07:00.303626 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:00.303698 master-0 kubenswrapper[4025]: I0318 13:07:00.303652 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: 
\"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:07:00.303813 master-0 kubenswrapper[4025]: I0318 13:07:00.303732 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:00.303813 master-0 kubenswrapper[4025]: E0318 13:07:00.303807 4025 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:00.303892 master-0 kubenswrapper[4025]: E0318 13:07:00.303826 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.303820187 +0000 UTC m=+129.223698809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:00.303892 master-0 kubenswrapper[4025]: E0318 13:07:00.303858 4025 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:07:00.303892 master-0 kubenswrapper[4025]: E0318 13:07:00.303874 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.303868908 +0000 UTC m=+129.223747530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.303905 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.303921 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.303916629 +0000 UTC m=+129.223795251 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.303950 4025 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.303967 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.30396155 +0000 UTC m=+129.223840172 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.303997 4025 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:00.304017 master-0 kubenswrapper[4025]: E0318 13:07:00.304012 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.304007711 +0000 UTC m=+129.223886333 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:00.407120 master-0 kubenswrapper[4025]: I0318 13:07:00.406460 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:00.407120 master-0 kubenswrapper[4025]: E0318 13:07:00.406688 4025 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:00.407120 master-0 kubenswrapper[4025]: E0318 13:07:00.406777 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.406755659 +0000 UTC m=+129.326634371 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:07:04.159857 master-0 kubenswrapper[4025]: I0318 13:07:04.159549 4025 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:04.161658 master-0 kubenswrapper[4025]: I0318 13:07:04.161611 4025 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:07:04.170777 master-0 kubenswrapper[4025]: E0318 13:07:04.170721 4025 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:07:04.170930 master-0 kubenswrapper[4025]: E0318 13:07:04.170791 4025 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:08.170762606 +0000 UTC m=+189.090641238 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found Mar 18 13:07:04.930349 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 13:07:04.955558 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 13:07:04.955934 master-0 systemd[1]: Stopped Kubernetes Kubelet. 
Mar 18 13:07:04.959815 master-0 systemd[1]: kubelet.service: Consumed 8.943s CPU time. Mar 18 13:07:04.976161 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 13:07:05.165877 master-0 kubenswrapper[7599]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 13:07:05.167640 master-0 kubenswrapper[7599]: I0318 13:07:05.166003 7599 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 13:07:05.170692 master-0 kubenswrapper[7599]: W0318 13:07:05.170644 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:07:05.170692 master-0 kubenswrapper[7599]: W0318 13:07:05.170679 7599 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:07:05.170869 master-0 kubenswrapper[7599]: W0318 13:07:05.170774 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 13:07:05.170869 master-0 kubenswrapper[7599]: W0318 13:07:05.170789 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:07:05.170869 master-0 kubenswrapper[7599]: W0318 13:07:05.170800 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:07:05.170869 master-0 kubenswrapper[7599]: W0318 13:07:05.170857 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:07:05.170869 master-0 kubenswrapper[7599]: W0318 13:07:05.170867 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170879 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170890 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170900 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170909 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170919 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170928 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170949 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170958 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170966 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170974 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170982 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.170989 7599 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171000 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171009 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171019 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171027 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171036 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:07:05.171182 master-0 kubenswrapper[7599]: W0318 13:07:05.171045 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171055 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171064 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171073 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171082 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171090 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171099 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171109 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171117 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 
13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171127 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171135 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171144 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171152 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171163 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171173 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171183 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171192 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171200 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171209 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171217 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:07:05.172405 master-0 kubenswrapper[7599]: W0318 13:07:05.171226 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171234 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:07:05.173771 master-0 
kubenswrapper[7599]: W0318 13:07:05.171243 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171252 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171260 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171268 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171276 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171284 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171291 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171299 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171306 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171316 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171325 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171332 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171340 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 
13:07:05.171348 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171356 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171364 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171371 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171379 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:07:05.173771 master-0 kubenswrapper[7599]: W0318 13:07:05.171387 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171396 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171404 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171439 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171459 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171475 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171485 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: W0318 13:07:05.171495 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171663 7599 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 
13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171681 7599 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171699 7599 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171710 7599 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171722 7599 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171732 7599 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171744 7599 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171754 7599 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171763 7599 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171773 7599 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171784 7599 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171793 7599 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171803 7599 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171812 7599 flags.go:64] FLAG: --cgroup-root="" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171821 7599 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 13:07:05.175007 master-0 kubenswrapper[7599]: I0318 13:07:05.171830 7599 flags.go:64] 
FLAG: --client-ca-file="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171839 7599 flags.go:64] FLAG: --cloud-config="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171848 7599 flags.go:64] FLAG: --cloud-provider="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171857 7599 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171867 7599 flags.go:64] FLAG: --cluster-domain="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171875 7599 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171885 7599 flags.go:64] FLAG: --config-dir="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171894 7599 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171903 7599 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171915 7599 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171926 7599 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171936 7599 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171945 7599 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171955 7599 flags.go:64] FLAG: --contention-profiling="false" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171964 7599 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171973 7599 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 
13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171982 7599 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.171991 7599 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172002 7599 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172012 7599 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172022 7599 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172031 7599 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172040 7599 flags.go:64] FLAG: --enable-server="true" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172049 7599 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172061 7599 flags.go:64] FLAG: --event-burst="100" Mar 18 13:07:05.176270 master-0 kubenswrapper[7599]: I0318 13:07:05.172071 7599 flags.go:64] FLAG: --event-qps="50" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172080 7599 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172089 7599 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172098 7599 flags.go:64] FLAG: --eviction-hard="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172109 7599 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172121 7599 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 
13:07:05.172129 7599 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172139 7599 flags.go:64] FLAG: --eviction-soft="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172148 7599 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172157 7599 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172166 7599 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172175 7599 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172184 7599 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172193 7599 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172202 7599 flags.go:64] FLAG: --feature-gates="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172212 7599 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172221 7599 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172231 7599 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172240 7599 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172249 7599 flags.go:64] FLAG: --healthz-port="10248" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172258 7599 flags.go:64] FLAG: --help="false" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172267 7599 flags.go:64] FLAG: 
--hostname-override="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172276 7599 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172285 7599 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172294 7599 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 13:07:05.177903 master-0 kubenswrapper[7599]: I0318 13:07:05.172303 7599 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172313 7599 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172321 7599 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172330 7599 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172339 7599 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172347 7599 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172356 7599 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172366 7599 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172375 7599 flags.go:64] FLAG: --kube-reserved="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172384 7599 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172392 7599 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172402 7599 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 
13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172448 7599 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172462 7599 flags.go:64] FLAG: --lock-file="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172474 7599 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172485 7599 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172497 7599 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172511 7599 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172521 7599 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172530 7599 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172539 7599 flags.go:64] FLAG: --logging-format="text" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172547 7599 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172557 7599 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172568 7599 flags.go:64] FLAG: --manifest-url="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172576 7599 flags.go:64] FLAG: --manifest-url-header="" Mar 18 13:07:05.179513 master-0 kubenswrapper[7599]: I0318 13:07:05.172588 7599 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172597 7599 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: 
I0318 13:07:05.172608 7599 flags.go:64] FLAG: --max-pods="110" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172617 7599 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172627 7599 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172636 7599 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172644 7599 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172653 7599 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172662 7599 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172672 7599 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172692 7599 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172701 7599 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172710 7599 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172719 7599 flags.go:64] FLAG: --pod-cidr="" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172728 7599 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172741 7599 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 13:07:05.181131 master-0 
kubenswrapper[7599]: I0318 13:07:05.172750 7599 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172759 7599 flags.go:64] FLAG: --pods-per-core="0" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172768 7599 flags.go:64] FLAG: --port="10250" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172777 7599 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172786 7599 flags.go:64] FLAG: --provider-id="" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172795 7599 flags.go:64] FLAG: --qos-reserved="" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172803 7599 flags.go:64] FLAG: --read-only-port="10255" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172812 7599 flags.go:64] FLAG: --register-node="true" Mar 18 13:07:05.181131 master-0 kubenswrapper[7599]: I0318 13:07:05.172822 7599 flags.go:64] FLAG: --register-schedulable="true" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.172830 7599 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.172845 7599 flags.go:64] FLAG: --registry-burst="10" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.172957 7599 flags.go:64] FLAG: --registry-qps="5" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173747 7599 flags.go:64] FLAG: --reserved-cpus="" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173782 7599 flags.go:64] FLAG: --reserved-memory="" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173823 7599 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173839 7599 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 13:07:05.182697 master-0 
kubenswrapper[7599]: I0318 13:07:05.173854 7599 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173869 7599 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173882 7599 flags.go:64] FLAG: --runonce="false" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173894 7599 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173907 7599 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173930 7599 flags.go:64] FLAG: --seccomp-default="false" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173943 7599 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173957 7599 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173970 7599 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173983 7599 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.173996 7599 flags.go:64] FLAG: --storage-driver-password="root" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174008 7599 flags.go:64] FLAG: --storage-driver-secure="false" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174021 7599 flags.go:64] FLAG: --storage-driver-table="stats" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174044 7599 flags.go:64] FLAG: --storage-driver-user="root" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174056 7599 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 18 13:07:05.182697 master-0 
kubenswrapper[7599]: I0318 13:07:05.174068 7599 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174080 7599 flags.go:64] FLAG: --system-cgroups="" Mar 18 13:07:05.182697 master-0 kubenswrapper[7599]: I0318 13:07:05.174093 7599 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174120 7599 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174136 7599 flags.go:64] FLAG: --tls-cert-file="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174160 7599 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174177 7599 flags.go:64] FLAG: --tls-min-version="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174189 7599 flags.go:64] FLAG: --tls-private-key-file="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174200 7599 flags.go:64] FLAG: --topology-manager-policy="none" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174212 7599 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174225 7599 flags.go:64] FLAG: --topology-manager-scope="container" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174238 7599 flags.go:64] FLAG: --v="2" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174254 7599 flags.go:64] FLAG: --version="false" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174278 7599 flags.go:64] FLAG: --vmodule="" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174292 7599 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: I0318 13:07:05.174305 7599 flags.go:64] FLAG: 
--volume-stats-agg-period="1m0s" Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175004 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175031 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175047 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175057 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175067 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175089 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175097 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175106 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175115 7599 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:07:05.184029 master-0 kubenswrapper[7599]: W0318 13:07:05.175123 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175132 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175142 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175150 7599 feature_gate.go:330] unrecognized feature gate: 
NetworkLiveMigration Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175158 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175167 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175180 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175188 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175197 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175205 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175215 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175258 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175302 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175316 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175328 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175338 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175347 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175355 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175372 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:07:05.185886 master-0 kubenswrapper[7599]: W0318 13:07:05.175384 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175394 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175402 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175467 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175476 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175486 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175494 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175502 7599 feature_gate.go:330] unrecognized feature gate: 
ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175512 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175521 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175529 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175543 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175551 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175560 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175568 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175576 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175584 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175592 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175603 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:07:05.187128 master-0 kubenswrapper[7599]: W0318 13:07:05.175613 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175623 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175644 7599 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175652 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175666 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175674 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175682 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175690 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175698 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175706 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175715 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175725 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175735 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175744 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175753 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175763 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175771 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175784 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175793 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175801 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:07:05.188293 master-0 kubenswrapper[7599]: W0318 13:07:05.175810 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.175818 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.175826 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.175836 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.175844 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: I0318 13:07:05.175871 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: I0318 13:07:05.186095 7599 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: I0318 13:07:05.186145 7599 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186299 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186318 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186329 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186338 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186347 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186358 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186369 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186380 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:07:05.189835 master-0 kubenswrapper[7599]: W0318 13:07:05.186390 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186400 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186409 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186473 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186483 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186493 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186504 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186514 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186524 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186534 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186545 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186555 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186565 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186576 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186590 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186604 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186615 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186628 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186639 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186649 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:07:05.190690 master-0 kubenswrapper[7599]: W0318 13:07:05.186661 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186672 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186683 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186695 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186705 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186715 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186726 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186736 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186746 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186756 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186770 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186783 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186794 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186804 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186814 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186829 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186841 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186852 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186863 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:07:05.191869 master-0 kubenswrapper[7599]: W0318 13:07:05.186875 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186886 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186897 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186910 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186922 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186932 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186943 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186955 7599 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186965 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186975 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186985 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.186995 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187005 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187015 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187025 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187035 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187046 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187057 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187067 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187077 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:07:05.193211 master-0 kubenswrapper[7599]: W0318 13:07:05.187088 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187098 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187109 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187120 7599 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187130 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: I0318 13:07:05.187146 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187512 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187528 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187538 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187547 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187555 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187563 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187572 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187580 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187588 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187596 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:07:05.194385 master-0 kubenswrapper[7599]: W0318 13:07:05.187604 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187612 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187620 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187628 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187636 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187644 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187652 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187660 7599 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187671 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187682 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187690 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187699 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187709 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187718 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187728 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187739 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187749 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187758 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187768 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:07:05.195346 master-0 kubenswrapper[7599]: W0318 13:07:05.187778 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187786 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187794 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187803 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187811 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187820 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187829 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187838 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187846 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187855 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187864 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187874 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187882 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187891 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187899 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187907 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187915 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187923 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187932 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187941 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:07:05.196403 master-0 kubenswrapper[7599]: W0318 13:07:05.187949 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.187958 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.187966 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.187973 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.187982 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.187990 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188000 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188008 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188017 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188025 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188033 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188041 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188049 7599 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188059 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188067 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188075 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188084 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188092 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188100 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188108 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:07:05.198172 master-0 kubenswrapper[7599]: W0318 13:07:05.188117 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: W0318 13:07:05.188125 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: W0318 13:07:05.188135 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.188151 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.188489 7599 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.191994 7599 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.192123 7599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.192541 7599 server.go:997] "Starting client certificate rotation"
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.192560 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.192815 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 08:31:17.062993133 +0000 UTC
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.192882 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h24m11.870117397s for next certificate rotation
Mar 18 13:07:05.199679 master-0 kubenswrapper[7599]: I0318 13:07:05.193620 7599 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:07:05.200445 master-0 kubenswrapper[7599]: I0318 13:07:05.195779 7599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:07:05.200445 master-0 kubenswrapper[7599]: I0318 13:07:05.199854 7599 log.go:25] "Validated CRI v1 runtime API"
Mar 18 13:07:05.203988 master-0 kubenswrapper[7599]: I0318 13:07:05.203805 7599 log.go:25] "Validated CRI v1 image API"
Mar 18 13:07:05.205865 master-0 kubenswrapper[7599]: I0318 13:07:05.205535 7599 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 13:07:05.234053 master-0 kubenswrapper[7599]: I0318 13:07:05.233971 7599 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 
b51f6abc-d651-468e-ae51-7c88144268ce:/dev/vda3] Mar 18 13:07:05.234982 master-0 kubenswrapper[7599]: I0318 13:07:05.234040 7599 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm major:0 minor:292 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm major:0 minor:228 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm major:0 minor:307 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5:{mountpoint:/var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5 major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd:{mountpoint:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc:{mountpoint:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq:{mountpoint:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf:{mountpoint:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7:{mountpoint:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b06a568-4dad-44b4-8312-aa52911dbfb0/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2b06a568-4dad-44b4-8312-aa52911dbfb0/volumes/kubernetes.io~projected/kube-api-access major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r:{mountpoint:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz:{mountpoint:/var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz major:0 minor:220 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl:{mountpoint:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh:{mountpoint:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6:{mountpoint:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh:{mountpoint:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr:{mountpoint:/var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w:{mountpoint:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2:{mountpoint:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t:{mountpoint:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr:{mountpoint:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9 major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd:{mountpoint:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx:{mountpoint:/var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw:{mountpoint:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb:{mountpoint:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh:{mountpoint:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv:{mountpoint:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v:{mountpoint:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg:{mountpoint:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4:{mountpoint:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4 major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5:{mountpoint:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5 major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp:{mountpoint:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp major:0 minor:273 fsType:tmpfs blockSize:0} 
overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/f94be2944f1b37080c2ffff6eafe29e3dc1d78fbef4d1ff27d31b7bc61c29b77/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/16f8b4ffea0a5e47cd0eba10a488d9af62e8ab412167dbd9c18ea620d3c350f5/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/0735b3202eefa367d2e2bf8c9918199fd45d3c905433a2b45ac7dcfc6a5867f3/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/157e1f54ca48648e41da36293453ff16c93b4722d12946dff3eb1642c6139611/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/e4f39858d7dbc18eacb5d0462192aac1e4abbd24c86cb6bd029bc4697139e490/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/feaa86af7dd75e79edc740950a9e0152e688c53901cfd7604df849927b7287a3/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/dcdc4e8ba8826281e7c48d612932a8283eca90e33683e6b152d27de6fd6b2e86/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/bd7bab062f77bf03de9196d10e71403888065cb937fb823e196d7f904de4d2bd/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/ef85dc54f6fcbc4f28acd3824a659281588e2c52dd6225bf08927e64f8a27ea1/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/8a45bb14e12f1449ccd398d11f93596bf6b5f576ee1253c64188d78ecbed1534/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/21950acd515b322dd3043586a49f7d90767a54668529984980041ff9d210a9fc/merged major:0 minor:154 fsType:overlay blockSize:0} 
overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/c2034c98cf3b50d944b38ad7c3cc732dfede273657536854d0237c14ca43d337/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/d036ac79db3579566b9f082874ca7cba70cf8521f51f459f164bc0c3911577a0/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/f85503cb2381ef71b95bb53999a50aa13076ac6e30fb3f802a04cccf06aadc78/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2f49ceb38fb13fbc573f4f6545c4ece5a35ed7649dbbca3bc64725e0971cc0d7/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/229da0185fd0270169de6748533c5fea41278a36cc273108430fa76d070eac1d/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/51411eb6139db5ed816ee46b541b7bc2cfd9737fba520b4f6e99b7b6f25ef049/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/657960edd263bbcba4240d0f00ef32f1b1137600861e4b928e950bf5dbe19341/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/95582931e54039022659918d30650f0d072b780faad8511dd4e30b23aebf99bf/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/b2ddee39e2f233562463301248adeda21b377a3fc095d693b6ee46937ce8b77b/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/0f8a3b4c2b206d258b50d3c588c905db0be0cf7271f1e9b4a99a1a926c114a93/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/786e8c1aead46264e9407de4e8b6aeb236155456022af57865756870d3596e32/merged major:0 minor:242 fsType:overlay blockSize:0} 
overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/d5ebc43ce43d77586f2e2edbed7dc736f37008ceb401920e5d570074365568e3/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/4de6d96f91604e9ed1eb5509c61b1b17ae0cf75029204790447a138485f6514d/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/e70df017947419d5d640516785ffd646c9d57b12c05c6c21ecee7649f1a0113e/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/b987e6fb46bb065749b28f77ff957a27abd7400d158649f160f0bae6618fb514/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/1e25e05c83287370a90b98c810023b84fdb0162d55fcc086e87fd5d6a516a87b/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/47bad877d182dec1a4563f3823eb5cdb4b8042e68496fbac655671f25d6555e1/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/45935998ec23985e73533d1f2a7c4401c6b339f25ba9cc57f3a2eb892431639e/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/b9a09ea8ce4f5b238043b8fe430dd3d67b60ec5970624fe0d3771e6f8ceb0faa/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/18e917e829cd5e34c8d03af304423da9d4220d28ec9261fe63c03801051b85c9/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/ae88d967fb6107201788847cbab7b83cc380c45c22148c347d2b54ed59b2e427/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/6fbfeeb04079a0c340807555834030154ad0df27cc93b2bef119f97ef92f9023/merged major:0 minor:305 fsType:overlay blockSize:0} 
overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/f408ae97d6dfe662463e184f3d317638d3d31f77089f23b1929e4eb745a9debc/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/419ad7a44dea2fb852ce7b17d9ced853e80a07d47fad574944dc8e1a0d45b295/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/057f8438f42935419cbb70ef0bb99e5572cea990b315452e1ca6dc13c666a060/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/e4086a8851f5733ccf20eb8420e0a6c87f45b3e8c36789a833030eb5e19a7057/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/34e86e0c680a0ce41fbd31354e6d2da173c52652d540532cb38eed2dcb9ee013/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/558f6769a22ee0edb86bfd9c7f32cbb7ecd65e73826206b96bdd65a7989074b0/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/b3895420043d94571c2e122273f7541394deb61f166823d5f995c076a0a78c18/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/4cf011e3bb2fcc4c2806da969cca4a50a58109c7b240818077de975f9b67fa4b/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/9b9f65f056b501fbee681bf1817a8970b758ded56c2e978bcf6efe5cf1b4d679/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/d755a8a51f043691658377516a20b009b6b3eae91de28ba3b31781f397487a9d/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/d77cfacfdfe02195af750deef0f7fbf785edf394c0b05e99ac736464c43c26cf/merged major:0 minor:69 fsType:overlay blockSize:0} 
overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/0a6fb8805eeebe1a0909caa8b5cffbc296ff008dcf03e6232cdfe16863d08fa1/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/d89bc9b405bb9519060ccd551b53ad518c6581ad9b1f35d6f348eec762029ba9/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/b90796ac31c9b832f1377a264428188a124413c3dd2be3ecd8011225c038fe11/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/2c533d320638d12a4d87f089659a7061d2866fd17e4e82001e0c2e2d0542c716/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/31cbeec8f080821660addd2a47a789bfbc05d6824863728af56c54c5e2ec0f6b/merged major:0 minor:94 fsType:overlay blockSize:0}] Mar 18 13:07:05.270183 master-0 kubenswrapper[7599]: I0318 13:07:05.269454 7599 manager.go:217] Machine: {Timestamp:2026-03-18 13:07:05.268474147 +0000 UTC m=+0.229528429 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b28c177d1c547b6b192765c9d5bc20c SystemUUID:0b28c177-d1c5-47b6-b192-765c9d5bc20c BootID:82754421-b051-4950-9dab-4c3886d93f55 Filesystems:[{Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm DeviceMajor:0 DeviceMinor:307 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 
DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv DeviceMajor:0 DeviceMinor:284 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq DeviceMajor:0 DeviceMinor:143 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5 DeviceMajor:0 DeviceMinor:269 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:278 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm DeviceMajor:0 DeviceMinor:228 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp DeviceMajor:0 DeviceMinor:273 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9 DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4 DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6 DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm DeviceMajor:0 DeviceMinor:292 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 
DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2b06a568-4dad-44b4-8312-aa52911dbfb0/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:98 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7 DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2 DeviceMajor:0 DeviceMinor:260 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5 DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:087f1bfbdd93b7c MacAddress:22:b3:03:f4:7f:94 Speed:10000 Mtu:8900} {Name:3e755bfdf969ae0 MacAddress:2e:43:94:a2:74:af Speed:10000 Mtu:8900} {Name:3f26792b1730130 MacAddress:32:5a:8e:b6:66:39 Speed:10000 Mtu:8900} {Name:4a97b24b2b4402b MacAddress:0e:4e:87:2d:fc:ea Speed:10000 Mtu:8900} {Name:5ebf31a11d3c2bc MacAddress:3a:ce:f4:bc:1f:d6 Speed:10000 Mtu:8900} {Name:6aa30a9c358b647 MacAddress:3a:a4:50:40:e9:78 Speed:10000 Mtu:8900} {Name:8bdb6f1dfbc7856 MacAddress:36:45:b0:50:fe:96 Speed:10000 Mtu:8900} {Name:aadd21574589df0 MacAddress:02:03:3e:e5:ce:44 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:46:57:c0:4d:d0:6e Speed:0 Mtu:8900} {Name:d01d4e5c147c00b MacAddress:92:a2:a4:f2:f6:c8 Speed:10000 Mtu:8900} {Name:e311ec640a1a240 MacAddress:aa:d4:a2:0a:33:aa Speed:10000 Mtu:8900} {Name:e3813939efa5069 MacAddress:3e:03:ea:d6:47:b0 Speed:10000 Mtu:8900} {Name:eth0 
MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:c1:64:46 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ac:24:0f Speed:-1 Mtu:9000} {Name:f060cf0da8cda14 MacAddress:72:cf:24:c3:76:32 Speed:10000 Mtu:8900} {Name:f9589a25d07ced5 MacAddress:86:58:49:71:d6:9e Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:52:62:dd:3e:30:92 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 13:07:05.270183 master-0 kubenswrapper[7599]: I0318 13:07:05.270142 7599 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 13:07:05.270813 master-0 kubenswrapper[7599]: I0318 13:07:05.270306 7599 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 13:07:05.270813 master-0 kubenswrapper[7599]: I0318 13:07:05.270583 7599 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 13:07:05.270813 master-0 kubenswrapper[7599]: I0318 13:07:05.270745 7599 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 13:07:05.270979 master-0 kubenswrapper[7599]: I0318 13:07:05.270778 7599 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 13:07:05.271063 master-0 kubenswrapper[7599]: I0318 13:07:05.270988 7599 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 13:07:05.271063 master-0 kubenswrapper[7599]: I0318 13:07:05.271000 7599 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 13:07:05.271063 master-0 kubenswrapper[7599]: I0318 13:07:05.271010 7599 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:07:05.271063 master-0 kubenswrapper[7599]: I0318 13:07:05.271035 7599 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 13:07:05.271275 master-0 kubenswrapper[7599]: I0318 13:07:05.271124 7599 state_mem.go:36] "Initialized new in-memory state store" Mar 18 13:07:05.271275 master-0 kubenswrapper[7599]: I0318 13:07:05.271208 7599 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 13:07:05.271275 master-0 kubenswrapper[7599]: I0318 13:07:05.271271 7599 kubelet.go:418] "Attempting to sync node with API server" Mar 18 13:07:05.271454 master-0 kubenswrapper[7599]: I0318 13:07:05.271282 7599 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 13:07:05.271454 master-0 kubenswrapper[7599]: I0318 13:07:05.271298 7599 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 13:07:05.271454 master-0 kubenswrapper[7599]: I0318 13:07:05.271311 7599 kubelet.go:324] "Adding apiserver pod source" Mar 18 13:07:05.271454 master-0 
kubenswrapper[7599]: I0318 13:07:05.271329 7599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 13:07:05.273268 master-0 kubenswrapper[7599]: I0318 13:07:05.272630 7599 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 13:07:05.273268 master-0 kubenswrapper[7599]: I0318 13:07:05.272821 7599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 18 13:07:05.273268 master-0 kubenswrapper[7599]: I0318 13:07:05.273218 7599 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 13:07:05.273581 master-0 kubenswrapper[7599]: I0318 13:07:05.273366 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 13:07:05.273581 master-0 kubenswrapper[7599]: I0318 13:07:05.273392 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 13:07:05.273581 master-0 kubenswrapper[7599]: I0318 13:07:05.273405 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273591 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273604 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273613 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273621 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273629 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 13:07:05.273743 master-0 
kubenswrapper[7599]: I0318 13:07:05.273639 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273647 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273658 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273674 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 13:07:05.273743 master-0 kubenswrapper[7599]: I0318 13:07:05.273700 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 13:07:05.274218 master-0 kubenswrapper[7599]: I0318 13:07:05.274028 7599 server.go:1280] "Started kubelet" Mar 18 13:07:05.275024 master-0 kubenswrapper[7599]: I0318 13:07:05.274555 7599 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 13:07:05.275024 master-0 kubenswrapper[7599]: I0318 13:07:05.274598 7599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 13:07:05.275024 master-0 kubenswrapper[7599]: I0318 13:07:05.274693 7599 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 13:07:05.275295 master-0 kubenswrapper[7599]: I0318 13:07:05.275251 7599 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 13:07:05.275112 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 18 13:07:05.290369 master-0 kubenswrapper[7599]: I0318 13:07:05.290138 7599 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 13:07:05.290831 master-0 kubenswrapper[7599]: I0318 13:07:05.290791 7599 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 13:07:05.290961 master-0 kubenswrapper[7599]: I0318 13:07:05.290919 7599 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 13:07:05.293952 master-0 kubenswrapper[7599]: I0318 13:07:05.293892 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 13:07:05.293952 master-0 kubenswrapper[7599]: I0318 13:07:05.293936 7599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 13:07:05.294212 master-0 kubenswrapper[7599]: I0318 13:07:05.294071 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 09:39:42.763344052 +0000 UTC
Mar 18 13:07:05.294212 master-0 kubenswrapper[7599]: I0318 13:07:05.294181 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h32m37.469171643s for next certificate rotation
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.294246 7599 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.294265 7599 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.294465 7599 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.294867 7599 factory.go:55] Registering systemd factory
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.294887 7599 factory.go:221] Registration of the systemd container factory successfully
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.295086 7599 factory.go:153] Registering CRI-O factory
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.295095 7599 factory.go:221] Registration of the crio container factory successfully
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.295151 7599 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 13:07:05.295277 master-0 kubenswrapper[7599]: I0318 13:07:05.295171 7599 factory.go:103] Registering Raw factory
Mar 18 13:07:05.295772 master-0 kubenswrapper[7599]: I0318 13:07:05.295354 7599 manager.go:1196] Started watching for new ooms in manager
Mar 18 13:07:05.295772 master-0 kubenswrapper[7599]: I0318 13:07:05.295644 7599 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 13:07:05.296635 master-0 kubenswrapper[7599]: I0318 13:07:05.295805 7599 manager.go:319] Starting recovery of all containers
Mar 18 13:07:05.307809 master-0 kubenswrapper[7599]: I0318 13:07:05.307727 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client" seLinuxMountContext=""
Mar 18 13:07:05.307809 master-0 kubenswrapper[7599]: I0318 13:07:05.307791 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token" seLinuxMountContext=""
Mar 18 13:07:05.307809 master-0 kubenswrapper[7599]: I0318 13:07:05.307809 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="394061b4-1bac-4699-96d2-88558c1adaf8" volumeName="kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307823 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c0e5eca-819b-40f3-bf77-0cd90a4f6e94" volumeName="kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307836 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307849 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307863 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307876 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2455453-5943-49ef-bfea-cba077197da0" volumeName="kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307893 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="053cc9bc-f98e-46f6-93bb-b5344d20bf74" volumeName="kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307905 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307917 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a97fe2-5022-4997-9936-4247ae7ecb43" volumeName="kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307930 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307943 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307960 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9" volumeName="kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307972 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.307986 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308001 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308013 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308024 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308036 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308049 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308062 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308074 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308085 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config" seLinuxMountContext=""
Mar 18 13:07:05.308091 master-0 kubenswrapper[7599]: I0318 13:07:05.308278 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308293 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308309 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308331 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308349 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="720a1f60-c1cb-4aef-aaec-f082090ca631" volumeName="kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308359 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308374 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308390 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308404 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="053cc9bc-f98e-46f6-93bb-b5344d20bf74" volumeName="kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308436 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308448 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308459 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308471 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea9eb53-0385-4a1a-a64f-696f8520cf49" volumeName="kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308484 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c0e5eca-819b-40f3-bf77-0cd90a4f6e94" volumeName="kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308495 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308511 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308527 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308536 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308548 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308561 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308574 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308587 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1cc230-0a79-4a1d-b500-a65d02e50973" volumeName="kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308599 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308610 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe643e40-d06d-4e69-9be3-0065c2a78567" volumeName="kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308622 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308632 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308643 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308655 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308674 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f4ae93-428b-4ebd-bfaa-18359b407ede" volumeName="kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308690 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308703 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308717 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308745 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308757 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308771 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308785 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308797 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b06a568-4dad-44b4-8312-aa52911dbfb0" volumeName="kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308809 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b06a568-4dad-44b4-8312-aa52911dbfb0" volumeName="kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308821 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308832 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308852 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308866 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308878 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308892 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308904 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308916 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59bf5114-29f9-4f70-8582-108e95327cb2" volumeName="kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308929 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308941 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308953 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.308990 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309003 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309015 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309027 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309038 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309048 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309059 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309070 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309081 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309092 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309102 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f4ae93-428b-4ebd-bfaa-18359b407ede" volumeName="kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309113 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a97fe2-5022-4997-9936-4247ae7ecb43" volumeName="kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309125 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309139 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309152 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309167 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309179 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe643e40-d06d-4e69-9be3-0065c2a78567" volumeName="kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309191 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq" seLinuxMountContext=""
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309202 7599 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 13:07:05.309581 master-0 kubenswrapper[7599]: I0318 13:07:05.309210 7599 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 13:07:05.369016 master-0 kubenswrapper[7599]: I0318 13:07:05.368928 7599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 13:07:05.370182 master-0 kubenswrapper[7599]: I0318 13:07:05.370150 7599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 13:07:05.370240 master-0 kubenswrapper[7599]: I0318 13:07:05.370185 7599 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 13:07:05.370240 master-0 kubenswrapper[7599]: I0318 13:07:05.370207 7599 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 13:07:05.370290 master-0 kubenswrapper[7599]: E0318 13:07:05.370247 7599 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 13:07:05.375359 master-0 kubenswrapper[7599]: I0318 13:07:05.375309 7599 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 13:07:05.378639 master-0 kubenswrapper[7599]: I0318 13:07:05.378583 7599 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d" exitCode=0
Mar 18 13:07:05.382243 master-0 kubenswrapper[7599]: I0318 13:07:05.382214 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="22e1bd5e28c298ede758e5ddea0b33351ac8c7be1111bab8e7269abdb7d0b24d" exitCode=0
Mar 18 13:07:05.382243 master-0 kubenswrapper[7599]: I0318 13:07:05.382242 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="7dd7465ff0a0e7bd1744dc8ce263fa13a50d77f65ff8439074a245d515a4445a" exitCode=0
Mar 18 13:07:05.382354 master-0 kubenswrapper[7599]: I0318 13:07:05.382253 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="ce72b00f2972d5446b5f276006e7acfa3fdc14bc227bc60b88d427b8aca46c01" exitCode=0
Mar 18 13:07:05.382354 master-0 kubenswrapper[7599]: I0318 13:07:05.382262 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8" exitCode=0
Mar 18 13:07:05.382354 master-0 kubenswrapper[7599]: I0318 13:07:05.382269 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="2c7ef62a916ad3298edbd1aa1cbc3e8ff60647bfc3a55655d38feae6a6189afb" exitCode=0
Mar 18 13:07:05.382354 master-0 kubenswrapper[7599]: I0318 13:07:05.382276 7599 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="b002856dfe7358511cd094dcfacc7030cb861d82b50197ce9130a1536facf510" exitCode=0
Mar 18 13:07:05.383454 master-0 kubenswrapper[7599]: I0318 13:07:05.383428 7599 generic.go:334] "Generic (PLEG): container finished" podID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerID="b515c044e4a53f4787c6a1c5354de363795974706495b5ee9abee555e41455a3" exitCode=0
Mar 18 13:07:05.398843 master-0 kubenswrapper[7599]: I0318 13:07:05.398788 7599 generic.go:334] "Generic (PLEG): container finished" podID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerID="2307d9f9b6edb7075e27303dc674c0604795c0e793d990a0bd35a8d4c7882a78" exitCode=0
Mar 18 13:07:05.402580 master-0 kubenswrapper[7599]: I0318 13:07:05.402541 7599 generic.go:334] "Generic (PLEG): container finished" podID="ab2f96fb-ef55-4427-a598-7e3f1e224045" containerID="5848e50846e9206c31c30b47f8e7f2df5ddc303c266302abaf44f36dbaa6229a" exitCode=0
Mar 18 13:07:05.408056 master-0 kubenswrapper[7599]: I0318 13:07:05.408010 7599 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388" exitCode=1
Mar 18 13:07:05.416014 master-0 kubenswrapper[7599]: I0318 13:07:05.415895 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:07:05.416298 master-0 kubenswrapper[7599]: I0318 13:07:05.416255 7599 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b" exitCode=1
Mar 18 13:07:05.416298 master-0 kubenswrapper[7599]: I0318 13:07:05.416287 7599 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f" exitCode=0
Mar 18 13:07:05.423982 master-0 kubenswrapper[7599]: I0318 13:07:05.423645 7599 manager.go:324] Recovery completed
Mar 18 13:07:05.463082 master-0 kubenswrapper[7599]: I0318 13:07:05.463037 7599 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 13:07:05.463082 master-0 kubenswrapper[7599]: I0318 13:07:05.463067 7599 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 13:07:05.463082 master-0 kubenswrapper[7599]: I0318 13:07:05.463089 7599 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:07:05.463303 master-0 kubenswrapper[7599]: I0318 13:07:05.463273 7599 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 13:07:05.463303 master-0 kubenswrapper[7599]: I0318 13:07:05.463288 7599 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 13:07:05.463365 master-0 kubenswrapper[7599]: I0318 13:07:05.463312 7599 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 13:07:05.463365 master-0 kubenswrapper[7599]: I0318 13:07:05.463322 7599 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 13:07:05.463365 master-0 kubenswrapper[7599]: I0318 13:07:05.463330 7599 policy_none.go:49] "None policy: Start"
Mar 18 13:07:05.465001 master-0 kubenswrapper[7599]: I0318 13:07:05.464967 7599 
memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 13:07:05.465001 master-0 kubenswrapper[7599]: I0318 13:07:05.465003 7599 state_mem.go:35] "Initializing new in-memory state store" Mar 18 13:07:05.465259 master-0 kubenswrapper[7599]: I0318 13:07:05.465230 7599 state_mem.go:75] "Updated machine memory state" Mar 18 13:07:05.465259 master-0 kubenswrapper[7599]: I0318 13:07:05.465251 7599 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 18 13:07:05.470455 master-0 kubenswrapper[7599]: E0318 13:07:05.470376 7599 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 13:07:05.473187 master-0 kubenswrapper[7599]: I0318 13:07:05.473158 7599 manager.go:334] "Starting Device Plugin manager" Mar 18 13:07:05.473241 master-0 kubenswrapper[7599]: I0318 13:07:05.473200 7599 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 13:07:05.473241 master-0 kubenswrapper[7599]: I0318 13:07:05.473213 7599 server.go:79] "Starting device plugin registration server" Mar 18 13:07:05.473568 master-0 kubenswrapper[7599]: I0318 13:07:05.473545 7599 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 13:07:05.473605 master-0 kubenswrapper[7599]: I0318 13:07:05.473563 7599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 13:07:05.473808 master-0 kubenswrapper[7599]: I0318 13:07:05.473713 7599 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 13:07:05.473808 master-0 kubenswrapper[7599]: I0318 13:07:05.473783 7599 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 13:07:05.473808 master-0 kubenswrapper[7599]: I0318 13:07:05.473791 7599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 13:07:05.574343 master-0 
kubenswrapper[7599]: I0318 13:07:05.574260 7599 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 13:07:05.575999 master-0 kubenswrapper[7599]: I0318 13:07:05.575958 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 13:07:05.575999 master-0 kubenswrapper[7599]: I0318 13:07:05.575998 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 13:07:05.576291 master-0 kubenswrapper[7599]: I0318 13:07:05.576010 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 13:07:05.576291 master-0 kubenswrapper[7599]: I0318 13:07:05.576032 7599 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 13:07:05.670615 master-0 kubenswrapper[7599]: I0318 13:07:05.669555 7599 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 13:07:05.671667 master-0 kubenswrapper[7599]: I0318 13:07:05.671534 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Mar 18 13:07:05.672734 master-0 kubenswrapper[7599]: I0318 13:07:05.672600 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605"} Mar 18 13:07:05.673023 master-0 kubenswrapper[7599]: I0318 13:07:05.672957 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7"} Mar 18 13:07:05.673228 master-0 kubenswrapper[7599]: I0318 13:07:05.673198 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d"} Mar 18 13:07:05.673402 master-0 kubenswrapper[7599]: I0318 13:07:05.673376 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb"} Mar 18 13:07:05.673649 master-0 kubenswrapper[7599]: I0318 13:07:05.673594 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b261faec74f68e9f91e396dd4e777ae1879f65ef14ec6f7736af761907c26b" Mar 18 13:07:05.673938 master-0 kubenswrapper[7599]: I0318 13:07:05.673832 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d" Mar 18 13:07:05.674279 master-0 kubenswrapper[7599]: I0318 13:07:05.674194 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525"} Mar 18 13:07:05.675762 master-0 kubenswrapper[7599]: I0318 13:07:05.674509 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"ef75baaea3b231f0a943268458f551b383f49ce5906993775a78b47a21e43600"} Mar 18 13:07:05.675911 
master-0 kubenswrapper[7599]: I0318 13:07:05.675770 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388"} Mar 18 13:07:05.675911 master-0 kubenswrapper[7599]: I0318 13:07:05.675807 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12"} Mar 18 13:07:05.675911 master-0 kubenswrapper[7599]: I0318 13:07:05.675833 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"} Mar 18 13:07:05.675911 master-0 kubenswrapper[7599]: I0318 13:07:05.675858 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"} Mar 18 13:07:05.675911 master-0 kubenswrapper[7599]: I0318 13:07:05.675879 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9b875352a426804c810cbcabae5c16bee69af1f5eb5abb8757a87c72322c1d90"} Mar 18 13:07:05.675911 master-0 kubenswrapper[7599]: I0318 13:07:05.675917 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c6b7be01dc24d7f26b3d57447fbf2490a6f4dfb2fb1c9fdf65bee4f74420bdb3"} Mar 18 13:07:05.676252 master-0 
kubenswrapper[7599]: I0318 13:07:05.675942 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b"} Mar 18 13:07:05.676252 master-0 kubenswrapper[7599]: I0318 13:07:05.675966 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f"} Mar 18 13:07:05.676252 master-0 kubenswrapper[7599]: I0318 13:07:05.675986 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5"} Mar 18 13:07:05.676252 master-0 kubenswrapper[7599]: I0318 13:07:05.676007 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767"} Mar 18 13:07:05.676252 master-0 kubenswrapper[7599]: I0318 13:07:05.676029 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7"} Mar 18 13:07:05.770110 master-0 kubenswrapper[7599]: I0318 13:07:05.770012 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: 
\"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.770110 master-0 kubenswrapper[7599]: I0318 13:07:05.770102 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770141 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770170 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770200 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770228 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770255 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770282 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770322 master-0 kubenswrapper[7599]: I0318 13:07:05.770314 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770340 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: 
I0318 13:07:05.770368 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770394 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770455 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770483 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770510 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " 
pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770535 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.770641 master-0 kubenswrapper[7599]: I0318 13:07:05.770561 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:05.871880 master-0 kubenswrapper[7599]: I0318 13:07:05.871777 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.871880 master-0 kubenswrapper[7599]: I0318 13:07:05.871846 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872192 master-0 kubenswrapper[7599]: I0318 13:07:05.872014 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872192 master-0 kubenswrapper[7599]: I0318 13:07:05.872151 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.872305 master-0 kubenswrapper[7599]: I0318 13:07:05.872173 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872365 master-0 kubenswrapper[7599]: I0318 13:07:05.872299 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.872479 master-0 kubenswrapper[7599]: I0318 13:07:05.872404 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872546 master-0 kubenswrapper[7599]: I0318 13:07:05.872496 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872546 master-0 kubenswrapper[7599]: I0318 13:07:05.872506 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872546 master-0 kubenswrapper[7599]: I0318 13:07:05.872537 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872542 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872600 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872639 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872649 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872676 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.872702 master-0 kubenswrapper[7599]: I0318 13:07:05.872686 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872746 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872752 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872758 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872791 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872801 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872850 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872855 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872908 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:05.873007 master-0 kubenswrapper[7599]: I0318 13:07:05.872966 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873116 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873227 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: 
I0318 13:07:05.873286 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873459 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873501 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873533 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.873628 master-0 kubenswrapper[7599]: I0318 13:07:05.873590 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:07:05.873957 master-0 kubenswrapper[7599]: I0318 13:07:05.873627 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:05.873957 master-0 kubenswrapper[7599]: I0318 13:07:05.873644 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:07:06.271945 master-0 kubenswrapper[7599]: I0318 13:07:06.271831 7599 apiserver.go:52] "Watching apiserver" Mar 18 13:07:06.282921 master-0 kubenswrapper[7599]: I0318 13:07:06.282881 7599 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:07:06.283682 master-0 kubenswrapper[7599]: I0318 13:07:06.283585 7599 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-multus/network-metrics-daemon-kbfbq","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz","openshift-marketplace/marketplace-operator-89ccd998f-99pzm","openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl","openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx","kube-system/bootstrap-kube-scheduler-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5","openshift-multus/multus-additional-cni-plugins-ttdn5","openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp","openshift-network-diagnostics/network-check-target-kcsgp","openshift-network-operator/network-operator-7bd846bfc4-gxxbr","openshift-ovn-kubernetes/ovnkube-node-kxqjc","assisted-installer/assisted-installer-controller-7bfhd","openshift-etcd/etcd-master-0-master-0","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk","openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4","openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb","openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p","ope
nshift-multus/multus-vkbvp","openshift-network-operator/iptables-alerter-jkl4x","openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld","openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp","openshift-dns-operator/dns-operator-9c5679d8f-5lzzn","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-network-node-identity/network-node-identity-x8r78","openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4","openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"] Mar 18 13:07:06.284093 master-0 kubenswrapper[7599]: I0318 13:07:06.284019 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd" Mar 18 13:07:06.285385 master-0 kubenswrapper[7599]: I0318 13:07:06.284777 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.286762 master-0 kubenswrapper[7599]: I0318 13:07:06.286244 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:06.286762 master-0 kubenswrapper[7599]: I0318 13:07:06.286373 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:07:06.286762 master-0 kubenswrapper[7599]: I0318 13:07:06.286502 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.286762 master-0 kubenswrapper[7599]: I0318 13:07:06.286514 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.287561 master-0 kubenswrapper[7599]: I0318 13:07:06.287268 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 13:07:06.287561 master-0 kubenswrapper[7599]: I0318 13:07:06.287469 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 13:07:06.288340 master-0 kubenswrapper[7599]: I0318 13:07:06.288280 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:07:06.288546 master-0 kubenswrapper[7599]: I0318 13:07:06.288462 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:07:06.288727 master-0 kubenswrapper[7599]: I0318 13:07:06.288670 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 13:07:06.289931 master-0 kubenswrapper[7599]: I0318 13:07:06.289012 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:06.289931 master-0 kubenswrapper[7599]: I0318 13:07:06.289435 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:06.291301 master-0 kubenswrapper[7599]: I0318 13:07:06.291197 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:06.293495 master-0 kubenswrapper[7599]: I0318 13:07:06.292715 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:07:06.293495 master-0 kubenswrapper[7599]: I0318 13:07:06.292993 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:07:06.293890 master-0 kubenswrapper[7599]: I0318 13:07:06.293836 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:06.294276 master-0 kubenswrapper[7599]: I0318 13:07:06.294236 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:06.294363 master-0 kubenswrapper[7599]: I0318 13:07:06.294322 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:07:06.296520 master-0 kubenswrapper[7599]: I0318 13:07:06.296491 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:06.297289 master-0 kubenswrapper[7599]: I0318 13:07:06.297204 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:06.297289 master-0 kubenswrapper[7599]: I0318 13:07:06.297299 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:06.298328 master-0 kubenswrapper[7599]: I0318 13:07:06.298283 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:07:06.298429 master-0 kubenswrapper[7599]: I0318 13:07:06.298364 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:07:06.298619 master-0 kubenswrapper[7599]: I0318 13:07:06.298571 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:07:06.298763 master-0 kubenswrapper[7599]: I0318 13:07:06.298700 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 13:07:06.411995 master-0 kubenswrapper[7599]: I0318 13:07:06.411913 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.411995 master-0 kubenswrapper[7599]: I0318 13:07:06.411989 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:07:06.412560 master-0 kubenswrapper[7599]: I0318 13:07:06.412516 7599 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.412642 master-0 kubenswrapper[7599]: I0318 13:07:06.412570 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.412642 master-0 kubenswrapper[7599]: I0318 13:07:06.412599 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.412642 master-0 kubenswrapper[7599]: I0318 13:07:06.412623 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:06.412840 master-0 kubenswrapper[7599]: I0318 13:07:06.412645 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:06.413021 master-0 
kubenswrapper[7599]: I0318 13:07:06.412974 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.413021 master-0 kubenswrapper[7599]: I0318 13:07:06.412992 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:07:06.413200 master-0 kubenswrapper[7599]: I0318 13:07:06.413044 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.413200 master-0 kubenswrapper[7599]: I0318 13:07:06.413071 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.413200 master-0 kubenswrapper[7599]: I0318 13:07:06.413095 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod 
\"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:07:06.413200 master-0 kubenswrapper[7599]: E0318 13:07:06.413194 7599 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: object "openshift-network-operator"/"metrics-tls" not registered Mar 18 13:07:06.413540 master-0 kubenswrapper[7599]: E0318 13:07:06.413245 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls podName:f7f4ae93-428b-4ebd-bfaa-18359b407ede nodeName:}" failed. No retries permitted until 2026-03-18 13:07:06.913229237 +0000 UTC m=+1.874283489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls") pod "network-operator-7bd846bfc4-gxxbr" (UID: "f7f4ae93-428b-4ebd-bfaa-18359b407ede") : object "openshift-network-operator"/"metrics-tls" not registered Mar 18 13:07:06.413843 master-0 kubenswrapper[7599]: I0318 13:07:06.413808 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:07:06.413964 master-0 kubenswrapper[7599]: I0318 13:07:06.413907 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:07:06.414053 master-0 kubenswrapper[7599]: I0318 13:07:06.414028 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:07:06.414590 master-0 kubenswrapper[7599]: I0318 13:07:06.414545 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 13:07:06.414726 master-0 kubenswrapper[7599]: I0318 13:07:06.414643 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 13:07:06.414726 master-0 kubenswrapper[7599]: I0318 13:07:06.414706 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.415090 master-0 kubenswrapper[7599]: I0318 13:07:06.415040 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:07:06.415170 master-0 kubenswrapper[7599]: I0318 13:07:06.415152 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:07:06.417011 master-0 kubenswrapper[7599]: I0318 13:07:06.416895 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:07:06.417011 master-0 kubenswrapper[7599]: I0318 13:07:06.417023 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:07:06.417241 master-0 kubenswrapper[7599]: I0318 13:07:06.417085 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:07:06.418484 master-0 kubenswrapper[7599]: I0318 13:07:06.418404 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:07:06.419803 master-0 kubenswrapper[7599]: I0318 13:07:06.419742 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:07:06.431031 master-0 kubenswrapper[7599]: I0318 13:07:06.423110 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: 
\"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.431031 master-0 kubenswrapper[7599]: I0318 13:07:06.423339 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.437009 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.437401 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.437687 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438001 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438016 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438084 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438153 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438206 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 
13:07:06.438221 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438310 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438405 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438456 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438488 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438613 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438634 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438682 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438748 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.438805 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: 
E0318 13:07:06.438880 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.439077 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: E0318 13:07:06.439212 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.439273 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: E0318 13:07:06.439390 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:07:06.439468 master-0 kubenswrapper[7599]: I0318 13:07:06.439478 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.439599 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.439650 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.439687 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.441211 master-0 
kubenswrapper[7599]: I0318 13:07:06.439754 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.439921 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440010 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440071 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440126 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440179 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440237 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440275 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440298 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440358 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:07:06.441211 master-0 kubenswrapper[7599]: I0318 13:07:06.440505 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:07:06.442066 master-0 kubenswrapper[7599]: I0318 13:07:06.441351 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:07:06.442066 master-0 kubenswrapper[7599]: I0318 13:07:06.441447 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:07:06.442066 master-0 kubenswrapper[7599]: I0318 13:07:06.441549 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:07:06.442066 master-0 kubenswrapper[7599]: I0318 13:07:06.441969 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:07:06.442066 master-0 kubenswrapper[7599]: I0318 13:07:06.442062 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:07:06.442458 master-0 kubenswrapper[7599]: I0318 13:07:06.442170 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:07:06.442458 master-0 kubenswrapper[7599]: I0318 13:07:06.442269 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:07:06.442458 master-0 kubenswrapper[7599]: I0318 13:07:06.442326 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:07:06.442458 master-0 kubenswrapper[7599]: I0318 13:07:06.442381 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 13:07:06.442458 master-0 kubenswrapper[7599]: I0318 13:07:06.442460 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 13:07:06.442826 master-0 kubenswrapper[7599]: I0318 13:07:06.442566 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.442826 master-0 kubenswrapper[7599]: I0318 13:07:06.442753 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 13:07:06.442996 master-0 kubenswrapper[7599]: I0318 13:07:06.442897 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 13:07:06.442996 master-0 kubenswrapper[7599]: I0318 13:07:06.442953 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.443283 master-0 kubenswrapper[7599]: I0318 13:07:06.443172 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 13:07:06.443656 master-0 kubenswrapper[7599]: I0318 13:07:06.443609 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:07:06.443841 master-0 kubenswrapper[7599]: I0318 13:07:06.443769 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 13:07:06.443936 master-0 kubenswrapper[7599]: I0318 13:07:06.443847 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 13:07:06.444067 master-0 kubenswrapper[7599]: I0318 13:07:06.444050 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:07:06.444495 master-0 kubenswrapper[7599]: I0318 13:07:06.444454 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 13:07:06.444590 master-0 kubenswrapper[7599]: I0318 13:07:06.444575 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 13:07:06.444661 master-0 kubenswrapper[7599]: I0318 13:07:06.444622 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 13:07:06.444748 master-0 kubenswrapper[7599]: I0318 13:07:06.444674 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 13:07:06.444819 master-0 kubenswrapper[7599]: I0318 13:07:06.444772 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 13:07:06.444819 master-0 kubenswrapper[7599]: I0318 13:07:06.444786 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.444945 master-0 kubenswrapper[7599]: I0318 13:07:06.444923 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 13:07:06.445102 master-0 kubenswrapper[7599]: I0318 13:07:06.445049 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 13:07:06.445298 master-0 kubenswrapper[7599]: I0318 13:07:06.445257 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.445511 master-0 kubenswrapper[7599]: I0318 13:07:06.445471 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 13:07:06.445648 master-0 kubenswrapper[7599]: I0318 13:07:06.445623 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.445914 master-0 kubenswrapper[7599]: I0318 13:07:06.445875 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 13:07:06.446032 master-0 kubenswrapper[7599]: I0318 13:07:06.446007 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.446251 master-0 kubenswrapper[7599]: I0318 13:07:06.446228 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 13:07:06.446386 master-0 kubenswrapper[7599]: I0318 13:07:06.446363 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:07:06.446547 master-0 kubenswrapper[7599]: I0318 13:07:06.446522 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 13:07:06.446675 master-0 kubenswrapper[7599]: I0318 13:07:06.446650 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 13:07:06.446874 master-0 kubenswrapper[7599]: I0318 13:07:06.446852 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 13:07:06.446960 master-0 kubenswrapper[7599]: I0318 13:07:06.446922 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.447171 master-0 kubenswrapper[7599]: I0318 13:07:06.447144 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.447323 master-0 kubenswrapper[7599]: I0318 13:07:06.447302 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.451372 master-0 kubenswrapper[7599]: I0318 13:07:06.450725 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:07:06.451622 master-0 kubenswrapper[7599]: I0318 13:07:06.451575 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:07:06.452374 master-0 kubenswrapper[7599]: W0318 13:07:06.452213 7599 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 13:07:06.455254 master-0 kubenswrapper[7599]: E0318 13:07:06.453084 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 13:07:06.455254 master-0 kubenswrapper[7599]: E0318 13:07:06.454616 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:07:06.458975 master-0 kubenswrapper[7599]: I0318 13:07:06.458893 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:07:06.459143 master-0 kubenswrapper[7599]: I0318 13:07:06.459028 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:07:06.465899 master-0 kubenswrapper[7599]: I0318 13:07:06.464752 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 13:07:06.468652 master-0 kubenswrapper[7599]: I0318 13:07:06.466357 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:07:06.468758 master-0 kubenswrapper[7599]: I0318 13:07:06.467146 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 13:07:06.472565 master-0 kubenswrapper[7599]: I0318 13:07:06.467446 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 13:07:06.474924 master-0 kubenswrapper[7599]: I0318 13:07:06.474877 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:07:06.498470 master-0 kubenswrapper[7599]: I0318 13:07:06.496187 7599 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 13:07:06.498470 master-0 kubenswrapper[7599]: I0318 13:07:06.496453 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:07:06.514096 master-0 kubenswrapper[7599]: I0318 13:07:06.514047 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:07:06.514171 master-0 kubenswrapper[7599]: I0318 13:07:06.514097 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514171 master-0 kubenswrapper[7599]: I0318 13:07:06.514126 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.514171 master-0 kubenswrapper[7599]: I0318 13:07:06.514150 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514172 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514195 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514216 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514238 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514258 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.514298 master-0 kubenswrapper[7599]: I0318 13:07:06.514282 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514304 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514326 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514348 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514369 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514423 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514449 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514481 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.514528 master-0 kubenswrapper[7599]: I0318 13:07:06.514502 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514542 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514564 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514586 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514610 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514632 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514652 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514672 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514693 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514717 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514741 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514762 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:07:06.514797 master-0 kubenswrapper[7599]: I0318 13:07:06.514783 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514826 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514850 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514881 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514908 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514943 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514967 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.514996 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515015 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515054 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515076 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515099 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515122 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515143 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515165 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515185 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515207 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515227 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515249 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.515262 master-0 kubenswrapper[7599]: I0318 13:07:06.515271 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515294 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515317 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515338 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515359 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515379 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515401 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515443 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515467 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515489 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515511 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515536 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515559 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515580 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515606 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515631 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515663 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515688 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.515997 master-0
kubenswrapper[7599]: I0318 13:07:06.515711 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515737 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515760 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515786 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515809 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod 
\"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515831 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515854 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515874 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515899 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: 
I0318 13:07:06.515921 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515945 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515969 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.515990 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.515997 master-0 kubenswrapper[7599]: I0318 13:07:06.516014 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516038 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516060 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516082 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516107 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516131 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516152 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516175 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516198 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516221 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516244 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516265 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516287 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516308 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:07:06.517168 
master-0 kubenswrapper[7599]: I0318 13:07:06.516331 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516352 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516377 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516399 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516435 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: 
\"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516458 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516481 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516504 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516525 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516547 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516569 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516593 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516616 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516637 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: 
\"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516660 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516683 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516707 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516730 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516754 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516774 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516794 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516817 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516839 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 
13:07:06.516859 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516879 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516897 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516920 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516952 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516971 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.516988 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517004 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517019 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517035 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517052 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517068 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517082 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517097 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.517168 
master-0 kubenswrapper[7599]: I0318 13:07:06.517111 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517128 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517143 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517161 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517197 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517214 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.517168 master-0 kubenswrapper[7599]: I0318 13:07:06.517231 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.517248 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.517265 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.517722 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.517979 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.518186 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.518328 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.518502 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.518670 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.518832 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:07:06.519111 master-0 kubenswrapper[7599]: I0318 13:07:06.519066 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.519476 master-0 kubenswrapper[7599]: I0318 13:07:06.519149 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:07:06.519476 master-0 kubenswrapper[7599]: I0318 13:07:06.519292 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:07:06.519476 master-0 kubenswrapper[7599]: I0318 13:07:06.519449 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:07:06.519669 master-0 kubenswrapper[7599]: I0318 13:07:06.519637 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.519762 master-0 kubenswrapper[7599]: I0318 13:07:06.519736 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:07:06.519898 master-0 kubenswrapper[7599]: I0318 13:07:06.519869 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.520040 master-0 kubenswrapper[7599]: I0318 13:07:06.520011 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.520128 master-0 kubenswrapper[7599]: I0318 13:07:06.520110 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.520283 master-0 kubenswrapper[7599]: I0318 13:07:06.520257 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:07:06.520424 master-0 kubenswrapper[7599]: I0318 13:07:06.520392 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.520522 master-0 kubenswrapper[7599]: I0318 13:07:06.520504 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.520684 master-0 kubenswrapper[7599]: I0318 13:07:06.520639 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.520776 master-0 kubenswrapper[7599]: I0318 13:07:06.520758 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.521066 master-0 kubenswrapper[7599]: I0318 13:07:06.521036 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:07:06.521452 master-0 kubenswrapper[7599]: I0318 13:07:06.521425 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.521551 master-0 kubenswrapper[7599]: I0318 13:07:06.521524 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:07:06.521677 master-0 kubenswrapper[7599]: I0318 13:07:06.521650 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:07:06.521768 master-0 kubenswrapper[7599]: I0318 13:07:06.521750 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:07:06.522007 master-0 kubenswrapper[7599]: I0318 13:07:06.521925 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.522096 master-0 kubenswrapper[7599]: I0318 13:07:06.522069 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:07:06.522231 master-0 kubenswrapper[7599]: I0318 13:07:06.522213 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.522535 master-0 kubenswrapper[7599]: I0318 13:07:06.522426 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.522694 master-0 kubenswrapper[7599]: I0318 13:07:06.522579 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:07:06.522694 master-0 kubenswrapper[7599]: E0318 13:07:06.522624 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:06.522694 master-0 kubenswrapper[7599]: E0318 13:07:06.522666 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.022654024 +0000 UTC m=+1.983708266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:06.522966 master-0 kubenswrapper[7599]: I0318 13:07:06.522946 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:07:06.523275 master-0 kubenswrapper[7599]: I0318 13:07:06.523244 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:06.523318 master-0 kubenswrapper[7599]: I0318 13:07:06.523299 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:07:06.523502 master-0 kubenswrapper[7599]: I0318 13:07:06.523471 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.523624 master-0 kubenswrapper[7599]: I0318 13:07:06.523594 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.523838 master-0 kubenswrapper[7599]: I0318 13:07:06.523809 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.524034 master-0 kubenswrapper[7599]: I0318 13:07:06.524003 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m"
Mar 18 13:07:06.524290 master-0 kubenswrapper[7599]: I0318 13:07:06.524261 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:06.524493 master-0 kubenswrapper[7599]: I0318 13:07:06.524457 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:06.524636 master-0 kubenswrapper[7599]: E0318 13:07:06.524609 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:06.524682 master-0 kubenswrapper[7599]: E0318 13:07:06.524643 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.024634071 +0000 UTC m=+1.985688303 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found
Mar 18 13:07:06.524904 master-0 kubenswrapper[7599]: I0318 13:07:06.524840 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:07:06.525117 master-0 kubenswrapper[7599]: I0318 13:07:06.525065 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:07:06.525351 master-0 kubenswrapper[7599]: I0318 13:07:06.525220 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:07:06.525698 master-0 kubenswrapper[7599]: I0318 13:07:06.525646 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:07:06.525698 master-0 kubenswrapper[7599]: E0318 13:07:06.525704 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:06.525881 master-0 kubenswrapper[7599]: E0318 13:07:06.525726 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.025718767 +0000 UTC m=+1.986773009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:06.526201 master-0 kubenswrapper[7599]: I0318 13:07:06.526158 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:07:06.526431 master-0 kubenswrapper[7599]: I0318 13:07:06.526392 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.526696 master-0 kubenswrapper[7599]: I0318 13:07:06.526676 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:07:06.526825 master-0 kubenswrapper[7599]: I0318 13:07:06.526789 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:07:06.533863 master-0 kubenswrapper[7599]: I0318 13:07:06.533820 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:07:06.538198 master-0 kubenswrapper[7599]: I0318 13:07:06.538164 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:07:06.538287 master-0 kubenswrapper[7599]: I0318 13:07:06.538214 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.553696 master-0 kubenswrapper[7599]: I0318 13:07:06.553651 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 13:07:06.554949 master-0 kubenswrapper[7599]: I0318 13:07:06.554897 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:06.574424 master-0 kubenswrapper[7599]: I0318 13:07:06.574376 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 13:07:06.594589 master-0 kubenswrapper[7599]: I0318 13:07:06.594516 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 13:07:06.596802 master-0 kubenswrapper[7599]: I0318 13:07:06.596764 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:06.614034 master-0 kubenswrapper[7599]: I0318 13:07:06.613976 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 13:07:06.618234 master-0 kubenswrapper[7599]: I0318 13:07:06.618154 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.618234 master-0 kubenswrapper[7599]: I0318 13:07:06.618202 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: I0318 13:07:06.618255 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: I0318 13:07:06.618275 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: I0318 13:07:06.618320 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: E0318 13:07:06.618533 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: E0318 13:07:06.618615 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.118595899 +0000 UTC m=+2.079650141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found
Mar 18 13:07:06.618667 master-0 kubenswrapper[7599]: I0318 13:07:06.618637 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: I0318 13:07:06.618742 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: E0318 13:07:06.618903 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: E0318 13:07:06.618924 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: E0318 13:07:06.618956 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.118948428 +0000 UTC m=+2.080002670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: E0318 13:07:06.618995 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.118963428 +0000 UTC m=+2.080017740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: I0318 13:07:06.619038 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: I0318 13:07:06.619097 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: I0318 13:07:06.619168 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.619260 master-0 kubenswrapper[7599]: I0318 13:07:06.619260 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619346 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619405 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619513 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619563 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619631 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619702 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619754 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619804 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod 
\"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619920 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.619993 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.620075 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.620158 master-0 kubenswrapper[7599]: I0318 13:07:06.620127 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620200 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620249 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620321 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620408 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620514 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620544 7599 secret.go:189] Couldn't get 
secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620571 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.120562647 +0000 UTC m=+2.081616889 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620573 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620606 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620627 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.120619108 +0000 UTC m=+2.081673340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620624 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620656 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620674 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620690 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620716 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620755 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620793 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620811 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.120805442 +0000 UTC m=+2.081859684 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620816 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620831 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.620885 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.120861774 +0000 UTC m=+2.081916136 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.620982 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621014 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621014 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621060 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.621100 7599 
secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.621128 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.12111866 +0000 UTC m=+2.082172902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621156 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621244 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.621360 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621376 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: 
\"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621465 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.621513 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.121405846 +0000 UTC m=+2.082460138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621570 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: I0318 13:07:06.621617 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:06.621595 master-0 kubenswrapper[7599]: E0318 13:07:06.621673 7599 
secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.621701 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.121692593 +0000 UTC m=+2.082746835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621721 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621751 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621786 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: 
\"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621804 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621826 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621855 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.621884 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622036 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 
13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622143 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622189 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622212 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622219 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622236 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622244 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622263 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622286 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622325 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622346 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.122338848 +0000 UTC m=+2.083393090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622363 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622432 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622460 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622502 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622536 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622604 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622635 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622637 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622640 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622663 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.122654146 +0000 UTC m=+2.083708388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622687 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622734 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622739 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: E0318 13:07:06.622783 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:07.122765958 +0000 UTC m=+2.083820320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622800 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622822 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622842 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622872 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622907 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622938 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622964 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.622995 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.623020 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.623041 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.623062 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.624193 master-0 kubenswrapper[7599]: I0318 13:07:06.623113 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.627171 master-0 kubenswrapper[7599]: I0318 13:07:06.624525 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:06.634871 master-0 kubenswrapper[7599]: I0318 13:07:06.634808 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:07:06.645025 master-0 kubenswrapper[7599]: I0318 13:07:06.644692 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.656571 master-0 kubenswrapper[7599]: I0318 13:07:06.656523 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 13:07:06.660278 master-0 kubenswrapper[7599]: I0318 13:07:06.660212 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.673546 master-0 kubenswrapper[7599]: I0318 13:07:06.673482 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 13:07:06.693209 master-0 kubenswrapper[7599]: I0318 13:07:06.693174 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 13:07:06.715477 master-0 kubenswrapper[7599]: I0318 13:07:06.715433 7599 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 13:07:06.715676 master-0 kubenswrapper[7599]: I0318 13:07:06.715558 7599 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 13:07:06.768344 master-0 kubenswrapper[7599]: I0318 13:07:06.768273 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:06.785398 master-0 kubenswrapper[7599]: I0318 13:07:06.785273 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:06.805529 master-0 kubenswrapper[7599]: I0318 13:07:06.805449 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:07:06.832540 master-0 kubenswrapper[7599]: I0318 13:07:06.832468 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:06.849803 master-0 kubenswrapper[7599]: I0318 13:07:06.849728 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:07:06.869545 master-0 kubenswrapper[7599]: I0318 13:07:06.869355 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:06.885121 master-0 kubenswrapper[7599]: I0318 13:07:06.885052 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:06.904670 master-0 kubenswrapper[7599]: I0318 13:07:06.904602 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:06.926959 master-0 kubenswrapper[7599]: I0318 13:07:06.926890 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:07:06.927338 master-0 kubenswrapper[7599]: I0318 13:07:06.927289 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:07:06.937211 master-0 kubenswrapper[7599]: I0318 13:07:06.937165 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:06.946522 master-0 kubenswrapper[7599]: I0318 13:07:06.946482 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:07:06.966389 master-0 kubenswrapper[7599]: I0318 13:07:06.966060 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:07:06.985237 master-0 kubenswrapper[7599]: I0318 13:07:06.985137 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:07.018185 master-0 kubenswrapper[7599]: I0318 13:07:07.018113 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x"
Mar 18 13:07:07.027889 master-0 kubenswrapper[7599]: I0318 13:07:07.027827 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:07:07.028186 master-0 kubenswrapper[7599]: E0318 13:07:07.028144 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:07.028268 master-0 kubenswrapper[7599]: E0318 13:07:07.028218 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.028195887 +0000 UTC m=+2.989250209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:07.028618 master-0 kubenswrapper[7599]: I0318 13:07:07.028582 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:07:07.028934 master-0 kubenswrapper[7599]: E0318 13:07:07.028795 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:07.028934 master-0 kubenswrapper[7599]: I0318 13:07:07.028865 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:07:07.028934 master-0 kubenswrapper[7599]: E0318 13:07:07.028876 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.028853612 +0000 UTC m=+2.989907844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:07.029136 master-0 kubenswrapper[7599]: E0318 13:07:07.028958 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:07.029136 master-0 kubenswrapper[7599]: E0318 13:07:07.029019 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.029003756 +0000 UTC m=+2.990058058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found
Mar 18 13:07:07.030092 master-0 kubenswrapper[7599]: I0318 13:07:07.030042 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:07.056624 master-0 kubenswrapper[7599]: I0318 13:07:07.055607 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:07:07.070454 master-0 kubenswrapper[7599]: I0318 13:07:07.070371 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:07.099038 master-0 kubenswrapper[7599]: I0318 13:07:07.098972 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:07:07.106039 master-0 kubenswrapper[7599]: I0318 13:07:07.106012 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:07:07.129658 master-0 kubenswrapper[7599]: I0318 13:07:07.129591 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:07.129850 master-0 kubenswrapper[7599]: E0318 13:07:07.129797 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 18 13:07:07.129917 master-0 kubenswrapper[7599]: E0318 13:07:07.129898 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.129871459 +0000 UTC m=+3.090925741 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found
Mar 18 13:07:07.129982 master-0 kubenswrapper[7599]: I0318 13:07:07.129813 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:07.129982 master-0 kubenswrapper[7599]: I0318 13:07:07.129973 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:07:07.130091 master-0 kubenswrapper[7599]: I0318 13:07:07.130006 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:07.130091 master-0 kubenswrapper[7599]: I0318 13:07:07.130055 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:07.130091 master-0 kubenswrapper[7599]: I0318 13:07:07.130086 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:07.130282 master-0 kubenswrapper[7599]: I0318 13:07:07.130145 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:07.130282 master-0 kubenswrapper[7599]: I0318 13:07:07.130177 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:07.130355 master-0 kubenswrapper[7599]: I0318 13:07:07.130280 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:07.130355 master-0 kubenswrapper[7599]: I0318 13:07:07.130308 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:07:07.130355 master-0 kubenswrapper[7599]: I0318 13:07:07.130333 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:07.130497 master-0 kubenswrapper[7599]: E0318 13:07:07.130438 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:07:07.130497 master-0 kubenswrapper[7599]: E0318 13:07:07.130473 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:07:07.130583 master-0 kubenswrapper[7599]: E0318 13:07:07.130516 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:07:07.130583 master-0 kubenswrapper[7599]: E0318 13:07:07.130567 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:07:07.130662 master-0 kubenswrapper[7599]: E0318 13:07:07.130628 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:07:07.130704 master-0 kubenswrapper[7599]: E0318 13:07:07.130633 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.130473964 +0000 UTC m=+3.091528206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found
Mar 18 13:07:07.130704 master-0 kubenswrapper[7599]: I0318 13:07:07.130694 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:07.130783 master-0 kubenswrapper[7599]: E0318 13:07:07.130717 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:07.130783 master-0 kubenswrapper[7599]: I0318 13:07:07.130731 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:07.130783 master-0 kubenswrapper[7599]: E0318 13:07:07.130757 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.13073962 +0000 UTC m=+3.091793952 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found
Mar 18 13:07:07.130905 master-0 kubenswrapper[7599]: I0318 13:07:07.130790 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:07.130905 master-0 kubenswrapper[7599]: E0318 13:07:07.130793 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:07.130905 master-0 kubenswrapper[7599]: E0318 13:07:07.130895 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.130936 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.130958 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.130832 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.130845 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.130835132 +0000 UTC m=+3.091889504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.131007 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:07:07.131015 master-0 kubenswrapper[7599]: E0318 13:07:07.131017 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131006536 +0000 UTC m=+3.092060888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found
Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131033 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131024137 +0000 UTC m=+3.092078509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131054 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131043747 +0000 UTC m=+3.092098099 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131068 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131060888 +0000 UTC m=+3.092115270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131083 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131076078 +0000 UTC m=+3.092130320 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131099 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131090868 +0000 UTC m=+3.092145220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131111 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131105199 +0000 UTC m=+3.092159551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131123 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:07:08.131117349 +0000 UTC m=+3.092171701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:07.131228 master-0 kubenswrapper[7599]: E0318 13:07:07.131137 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:08.131129759 +0000 UTC m=+3.092184141 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:07.145087 master-0 kubenswrapper[7599]: I0318 13:07:07.145049 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:07:07.164911 master-0 kubenswrapper[7599]: I0318 13:07:07.164853 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 
13:07:07.185743 master-0 kubenswrapper[7599]: I0318 13:07:07.185693 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:07.199611 master-0 kubenswrapper[7599]: I0318 13:07:07.199579 7599 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:07:07.207018 master-0 kubenswrapper[7599]: I0318 13:07:07.206973 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:07:07.230224 master-0 kubenswrapper[7599]: I0318 13:07:07.229953 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:07.243383 master-0 kubenswrapper[7599]: I0318 13:07:07.237560 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:07:07.255600 master-0 kubenswrapper[7599]: I0318 13:07:07.255549 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod 
\"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:07:07.265227 master-0 kubenswrapper[7599]: I0318 13:07:07.265189 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:07.310971 master-0 kubenswrapper[7599]: I0318 13:07:07.298657 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:07:07.313459 master-0 kubenswrapper[7599]: I0318 13:07:07.312451 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:07.332614 master-0 kubenswrapper[7599]: I0318 13:07:07.332116 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:07.358885 
master-0 kubenswrapper[7599]: I0318 13:07:07.358846 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:07.370715 master-0 kubenswrapper[7599]: I0318 13:07:07.370674 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:07:07.386903 master-0 kubenswrapper[7599]: I0318 13:07:07.386872 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:07:07.405567 master-0 kubenswrapper[7599]: I0318 13:07:07.404581 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:07:07.429853 master-0 kubenswrapper[7599]: I0318 13:07:07.429817 7599 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 13:07:07.435532 master-0 kubenswrapper[7599]: I0318 13:07:07.435485 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:07.448564 master-0 kubenswrapper[7599]: I0318 13:07:07.448534 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"0fa9267fcb1942ed177056f1462768d5db7582291e5f4b758f528a23e47041d8"} Mar 18 13:07:07.450161 master-0 kubenswrapper[7599]: I0318 13:07:07.450127 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"a2dd4b79716d36a56d21bba417e3ebe1360ab2ee3f667763e4260bf014da2347"} Mar 18 13:07:07.450824 master-0 kubenswrapper[7599]: I0318 13:07:07.450803 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerStarted","Data":"34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1"} Mar 18 13:07:07.465135 master-0 kubenswrapper[7599]: I0318 13:07:07.465065 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" 
event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerStarted","Data":"c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce"} Mar 18 13:07:07.469006 master-0 kubenswrapper[7599]: I0318 13:07:07.468963 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerStarted","Data":"6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6"} Mar 18 13:07:07.498673 master-0 kubenswrapper[7599]: I0318 13:07:07.498319 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:07.751145 master-0 kubenswrapper[7599]: I0318 13:07:07.747939 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-kcsgp"] Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: I0318 13:07:08.055904 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: I0318 13:07:08.056079 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: I0318 13:07:08.056154 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056288 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056351 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.05632878 +0000 UTC m=+5.017383022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056770 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056805 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.05679329 +0000 UTC m=+5.017847542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056849 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:08.059436 master-0 kubenswrapper[7599]: E0318 13:07:08.056871 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.056862942 +0000 UTC m=+5.017917194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 13:07:08.156905 master-0 kubenswrapper[7599]: I0318 13:07:08.156642 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:07:08.157135 master-0 kubenswrapper[7599]: E0318 13:07:08.157099 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:08.157197 master-0 kubenswrapper[7599]: E0318 13:07:08.157180 7599 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.157161892 +0000 UTC m=+5.118216134 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:08.157281 master-0 kubenswrapper[7599]: I0318 13:07:08.157123 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:07:08.157427 master-0 kubenswrapper[7599]: E0318 13:07:08.157400 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:08.157520 master-0 kubenswrapper[7599]: E0318 13:07:08.157510 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.15749583 +0000 UTC m=+5.118550072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:08.157617 master-0 kubenswrapper[7599]: I0318 13:07:08.157601 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:08.157698 master-0 kubenswrapper[7599]: I0318 13:07:08.157686 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:08.157763 master-0 kubenswrapper[7599]: I0318 13:07:08.157753 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:08.157847 master-0 kubenswrapper[7599]: I0318 13:07:08.157835 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: 
\"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:08.157920 master-0 kubenswrapper[7599]: I0318 13:07:08.157908 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:08.158032 master-0 kubenswrapper[7599]: I0318 13:07:08.158019 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:08.158105 master-0 kubenswrapper[7599]: I0318 13:07:08.158093 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:08.158171 master-0 kubenswrapper[7599]: I0318 13:07:08.158160 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:08.158249 master-0 kubenswrapper[7599]: I0318 13:07:08.158235 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:07:08.158318 master-0 kubenswrapper[7599]: I0318 13:07:08.158307 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:08.158385 master-0 kubenswrapper[7599]: I0318 13:07:08.158374 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:07:08.158514 master-0 kubenswrapper[7599]: E0318 13:07:08.158501 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:07:08.158588 master-0 kubenswrapper[7599]: E0318 13:07:08.158579 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.158568576 +0000 UTC m=+5.119622818 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:07:08.158683 master-0 kubenswrapper[7599]: E0318 13:07:08.158670 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:08.158779 master-0 kubenswrapper[7599]: E0318 13:07:08.158767 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.15875922 +0000 UTC m=+5.119813462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:07:08.158878 master-0 kubenswrapper[7599]: E0318 13:07:08.158867 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 13:07:08.158945 master-0 kubenswrapper[7599]: E0318 13:07:08.158936 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.158928584 +0000 UTC m=+5.119982826 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found Mar 18 13:07:08.159043 master-0 kubenswrapper[7599]: E0318 13:07:08.159032 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:08.159109 master-0 kubenswrapper[7599]: E0318 13:07:08.159101 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159093798 +0000 UTC m=+5.120148040 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:08.159208 master-0 kubenswrapper[7599]: E0318 13:07:08.159198 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:08.159277 master-0 kubenswrapper[7599]: E0318 13:07:08.159268 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159259162 +0000 UTC m=+5.120313404 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:08.159366 master-0 kubenswrapper[7599]: E0318 13:07:08.159356 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:08.159457 master-0 kubenswrapper[7599]: E0318 13:07:08.159445 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159436226 +0000 UTC m=+5.120490468 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:07:08.159568 master-0 kubenswrapper[7599]: E0318 13:07:08.159556 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:07:08.159648 master-0 kubenswrapper[7599]: E0318 13:07:08.159639 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159630691 +0000 UTC m=+5.120684923 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found Mar 18 13:07:08.159743 master-0 kubenswrapper[7599]: E0318 13:07:08.159730 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:08.159819 master-0 kubenswrapper[7599]: E0318 13:07:08.159810 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159801765 +0000 UTC m=+5.120856007 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:07:08.159908 master-0 kubenswrapper[7599]: E0318 13:07:08.159898 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:08.159985 master-0 kubenswrapper[7599]: E0318 13:07:08.159976 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.159968939 +0000 UTC m=+5.121023181 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found Mar 18 13:07:08.160075 master-0 kubenswrapper[7599]: E0318 13:07:08.160065 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:08.160391 master-0 kubenswrapper[7599]: E0318 13:07:08.160381 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.160371408 +0000 UTC m=+5.121425650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:08.160476 master-0 kubenswrapper[7599]: E0318 13:07:08.160283 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 13:07:08.160544 master-0 kubenswrapper[7599]: E0318 13:07:08.160535 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.160527922 +0000 UTC m=+5.121582164 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found Mar 18 13:07:08.431838 master-0 kubenswrapper[7599]: I0318 13:07:08.431779 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz"] Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: E0318 13:07:08.432001 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: I0318 13:07:08.432016 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: E0318 13:07:08.432027 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: I0318 13:07:08.432036 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: I0318 13:07:08.432143 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: I0318 13:07:08.432157 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf6a38b-0bdd-4767-bc33-7cc12d9537e7" containerName="prober" Mar 18 13:07:08.441129 master-0 kubenswrapper[7599]: I0318 13:07:08.432540 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" Mar 18 13:07:08.443246 master-0 kubenswrapper[7599]: I0318 13:07:08.443209 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:08.455012 master-0 kubenswrapper[7599]: I0318 13:07:08.452915 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz"] Mar 18 13:07:08.482765 master-0 kubenswrapper[7599]: I0318 13:07:08.482682 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"58ba17cb9e47416db3b6a0a6b8c2a2608308d20a79593a16babea0c6f26ec54c"} Mar 18 13:07:08.486001 master-0 kubenswrapper[7599]: I0318 13:07:08.485969 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-kcsgp" event={"ID":"0278b04b-b27b-4717-a009-a70315fd05a6","Type":"ContainerStarted","Data":"255878e502d4cefb42ba40055cada36ae5db45de3d4a7c393b1e4c8220dae784"} Mar 18 13:07:08.486067 master-0 kubenswrapper[7599]: I0318 13:07:08.486000 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-kcsgp" event={"ID":"0278b04b-b27b-4717-a009-a70315fd05a6","Type":"ContainerStarted","Data":"caf8685ec1d7171c12646ad4a2c704d85c1985e24c1994b6f4a18dfa14666d6f"} Mar 18 13:07:08.491020 master-0 kubenswrapper[7599]: I0318 13:07:08.490980 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerStarted","Data":"e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421"} Mar 18 13:07:08.501434 master-0 
kubenswrapper[7599]: I0318 13:07:08.498638 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"4f4390a1edc4e74d8425b268d4802fbbd68b0a727bcc922dd63ac0c094e61704"} Mar 18 13:07:08.522443 master-0 kubenswrapper[7599]: I0318 13:07:08.520631 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerStarted","Data":"8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83"} Mar 18 13:07:08.522443 master-0 kubenswrapper[7599]: I0318 13:07:08.522227 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:08.549433 master-0 kubenswrapper[7599]: I0318 13:07:08.545805 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-49h6x"] Mar 18 13:07:08.549433 master-0 kubenswrapper[7599]: I0318 13:07:08.546849 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:07:08.550164 master-0 kubenswrapper[7599]: I0318 13:07:08.550129 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 13:07:08.553428 master-0 kubenswrapper[7599]: I0318 13:07:08.551779 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-49h6x"] Mar 18 13:07:08.553428 master-0 kubenswrapper[7599]: I0318 13:07:08.552928 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"e98d728f4b1b0e813247323f6966121eae00b055f966e7db7eab7c672af9c4da"} Mar 18 13:07:08.557443 master-0 kubenswrapper[7599]: I0318 13:07:08.557385 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:07:08.568442 master-0 kubenswrapper[7599]: I0318 13:07:08.567973 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lvq\" (UniqueName: \"kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq\") pod \"csi-snapshot-controller-64854d9cff-qsnxz\" (UID: \"deb67ea0-8342-40cb-b0f4-115270e878dd\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" Mar 18 13:07:08.583623 master-0 kubenswrapper[7599]: I0318 13:07:08.583584 7599 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="0697c4988a8dce166398ff970c57c5e68178bc04fae2f2829aa0dffd05961950" exitCode=0 Mar 18 13:07:08.584082 master-0 kubenswrapper[7599]: I0318 13:07:08.584057 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"0697c4988a8dce166398ff970c57c5e68178bc04fae2f2829aa0dffd05961950"} Mar 18 13:07:08.669619 master-0 kubenswrapper[7599]: I0318 13:07:08.669041 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lvq\" (UniqueName: \"kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq\") pod \"csi-snapshot-controller-64854d9cff-qsnxz\" (UID: \"deb67ea0-8342-40cb-b0f4-115270e878dd\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" Mar 18 13:07:08.669619 master-0 kubenswrapper[7599]: I0318 13:07:08.669184 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28z2f\" (UniqueName: \"kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f\") pod \"migrator-8487694857-49h6x\" (UID: \"5b2acd84-85c0-4c47-90a4-44745b79976d\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:07:08.724163 master-0 kubenswrapper[7599]: I0318 13:07:08.724109 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lvq\" (UniqueName: \"kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq\") pod \"csi-snapshot-controller-64854d9cff-qsnxz\" (UID: \"deb67ea0-8342-40cb-b0f4-115270e878dd\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" Mar 18 13:07:08.770853 master-0 kubenswrapper[7599]: I0318 13:07:08.770811 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z2f\" (UniqueName: \"kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f\") pod \"migrator-8487694857-49h6x\" (UID: \"5b2acd84-85c0-4c47-90a4-44745b79976d\") " 
pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:07:08.848611 master-0 kubenswrapper[7599]: I0318 13:07:08.848577 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" Mar 18 13:07:08.944517 master-0 kubenswrapper[7599]: I0318 13:07:08.941536 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z2f\" (UniqueName: \"kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f\") pod \"migrator-8487694857-49h6x\" (UID: \"5b2acd84-85c0-4c47-90a4-44745b79976d\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:07:08.991433 master-0 kubenswrapper[7599]: I0318 13:07:08.979682 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:07:09.045432 master-0 kubenswrapper[7599]: I0318 13:07:09.040352 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:09.051479 master-0 kubenswrapper[7599]: I0318 13:07:09.050330 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:09.189093 master-0 kubenswrapper[7599]: I0318 13:07:09.188832 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz"] Mar 18 13:07:09.589442 master-0 kubenswrapper[7599]: I0318 13:07:09.588190 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:07:09.589442 master-0 kubenswrapper[7599]: I0318 13:07:09.588824 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-4vt5h"] Mar 18 13:07:09.589442 master-0 kubenswrapper[7599]: 
I0318 13:07:09.589261 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.594212 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.595764 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.596067 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.596255 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.596399 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.596510 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.596618 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:09.598087 master-0 kubenswrapper[7599]: I0318 13:07:09.597848 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:07:09.609439 master-0 kubenswrapper[7599]: I0318 13:07:09.603719 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-4vt5h"] Mar 18 13:07:09.683223 master-0 kubenswrapper[7599]: 
I0318 13:07:09.683166 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.683507 master-0 kubenswrapper[7599]: I0318 13:07:09.683231 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.683507 master-0 kubenswrapper[7599]: I0318 13:07:09.683435 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.683579 master-0 kubenswrapper[7599]: I0318 13:07:09.683506 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.683579 master-0 kubenswrapper[7599]: I0318 13:07:09.683542 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rntv\" (UniqueName: 
\"kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.684627 master-0 kubenswrapper[7599]: W0318 13:07:09.684589 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddeb67ea0_8342_40cb_b0f4_115270e878dd.slice/crio-86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f WatchSource:0}: Error finding container 86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f: Status 404 returned error can't find the container with id 86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f Mar 18 13:07:09.784306 master-0 kubenswrapper[7599]: I0318 13:07:09.784256 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.784528 master-0 kubenswrapper[7599]: I0318 13:07:09.784319 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.784528 master-0 kubenswrapper[7599]: I0318 13:07:09.784348 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rntv\" (UniqueName: \"kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: 
\"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.784528 master-0 kubenswrapper[7599]: E0318 13:07:09.784459 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 13:07:09.784688 master-0 kubenswrapper[7599]: E0318 13:07:09.784543 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.28452159 +0000 UTC m=+5.245575852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : configmap "config" not found Mar 18 13:07:09.784688 master-0 kubenswrapper[7599]: E0318 13:07:09.784549 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 13:07:09.784688 master-0 kubenswrapper[7599]: E0318 13:07:09.784654 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.284625222 +0000 UTC m=+5.245679504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : configmap "openshift-global-ca" not found Mar 18 13:07:09.784894 master-0 kubenswrapper[7599]: I0318 13:07:09.784758 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.784894 master-0 kubenswrapper[7599]: I0318 13:07:09.784837 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.785051 master-0 kubenswrapper[7599]: E0318 13:07:09.785004 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:09.785102 master-0 kubenswrapper[7599]: E0318 13:07:09.785086 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.285065642 +0000 UTC m=+5.246119934 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : secret "serving-cert" not found Mar 18 13:07:09.785150 master-0 kubenswrapper[7599]: E0318 13:07:09.785118 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:09.785186 master-0 kubenswrapper[7599]: E0318 13:07:09.785173 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.285157814 +0000 UTC m=+5.246212146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : configmap "client-ca" not found Mar 18 13:07:09.804579 master-0 kubenswrapper[7599]: I0318 13:07:09.804503 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rntv\" (UniqueName: \"kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:09.837548 master-0 kubenswrapper[7599]: I0318 13:07:09.837477 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:07:09.968967 master-0 kubenswrapper[7599]: I0318 13:07:09.968930 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-49h6x"] Mar 18 13:07:09.978672 master-0 
kubenswrapper[7599]: W0318 13:07:09.978642 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b2acd84_85c0_4c47_90a4_44745b79976d.slice/crio-6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c WatchSource:0}: Error finding container 6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c: Status 404 returned error can't find the container with id 6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c Mar 18 13:07:10.089155 master-0 kubenswrapper[7599]: I0318 13:07:10.089113 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:10.089444 master-0 kubenswrapper[7599]: I0318 13:07:10.089260 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:10.089444 master-0 kubenswrapper[7599]: I0318 13:07:10.089301 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:10.089444 master-0 kubenswrapper[7599]: E0318 13:07:10.089431 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found 
Mar 18 13:07:10.089597 master-0 kubenswrapper[7599]: E0318 13:07:10.089474 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.089459644 +0000 UTC m=+9.050513886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found
Mar 18 13:07:10.089769 master-0 kubenswrapper[7599]: E0318 13:07:10.089749 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:10.089835 master-0 kubenswrapper[7599]: E0318 13:07:10.089790 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.089779461 +0000 UTC m=+9.050833703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:10.089835 master-0 kubenswrapper[7599]: E0318 13:07:10.089827 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:10.089924 master-0 kubenswrapper[7599]: E0318 13:07:10.089846 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.089839633 +0000 UTC m=+9.050893875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:10.133766 master-0 kubenswrapper[7599]: I0318 13:07:10.133426 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-4vt5h"]
Mar 18 13:07:10.133766 master-0 kubenswrapper[7599]: E0318 13:07:10.133668 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" podUID="0d56529f-6fe0-4652-ac0a-f229aef1b86f"
Mar 18 13:07:10.145382 master-0 kubenswrapper[7599]: I0318 13:07:10.144008 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"]
Mar 18 13:07:10.145382 master-0 kubenswrapper[7599]: I0318 13:07:10.144656 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.149329 master-0 kubenswrapper[7599]: I0318 13:07:10.149079 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:07:10.154272 master-0 kubenswrapper[7599]: I0318 13:07:10.153613 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:07:10.154272 master-0 kubenswrapper[7599]: I0318 13:07:10.153677 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 13:07:10.154272 master-0 kubenswrapper[7599]: I0318 13:07:10.153951 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:07:10.154272 master-0 kubenswrapper[7599]: I0318 13:07:10.154097 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 13:07:10.160767 master-0 kubenswrapper[7599]: I0318 13:07:10.160672 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"]
Mar 18 13:07:10.190922 master-0 kubenswrapper[7599]: I0318 13:07:10.190878 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:10.190922 master-0 kubenswrapper[7599]: I0318 13:07:10.190918 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.190943 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.190969 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.190990 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191010 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191029 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191048 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191071 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191087 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191119 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191139 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.191159 master-0 kubenswrapper[7599]: I0318 13:07:10.191161 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: I0318 13:07:10.191201 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: I0318 13:07:10.191226 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: I0318 13:07:10.191245 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: I0318 13:07:10.191263 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: E0318 13:07:10.191444 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:10.191605 master-0 kubenswrapper[7599]: E0318 13:07:10.191490 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.191476614 +0000 UTC m=+9.152530856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 13:07:10.191809 master-0 kubenswrapper[7599]: E0318 13:07:10.191783 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:10.191809 master-0 kubenswrapper[7599]: E0318 13:07:10.191806 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.191799781 +0000 UTC m=+9.152854023 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:10.191868 master-0 kubenswrapper[7599]: E0318 13:07:10.191838 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 13:07:10.191868 master-0 kubenswrapper[7599]: E0318 13:07:10.191854 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls podName:c3ff09ab-cbe1-49e7-8121-5f71997a5176 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.191849204 +0000 UTC m=+9.152903446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-kvbzn" (UID: "c3ff09ab-cbe1-49e7-8121-5f71997a5176") : secret "node-tuning-operator-tls" not found
Mar 18 13:07:10.191927 master-0 kubenswrapper[7599]: E0318 13:07:10.191909 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 18 13:07:10.191927 master-0 kubenswrapper[7599]: E0318 13:07:10.191927 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.191921365 +0000 UTC m=+9.152975607 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found
Mar 18 13:07:10.191986 master-0 kubenswrapper[7599]: E0318 13:07:10.191957 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:07:10.191986 master-0 kubenswrapper[7599]: E0318 13:07:10.191973 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.191968136 +0000 UTC m=+9.153022378 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found
Mar 18 13:07:10.192061 master-0 kubenswrapper[7599]: E0318 13:07:10.192010 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:10.192061 master-0 kubenswrapper[7599]: E0318 13:07:10.192027 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192022148 +0000 UTC m=+9.153076400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found
Mar 18 13:07:10.192061 master-0 kubenswrapper[7599]: E0318 13:07:10.192056 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 13:07:10.192190 master-0 kubenswrapper[7599]: E0318 13:07:10.192074 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192067639 +0000 UTC m=+9.153121881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found
Mar 18 13:07:10.192190 master-0 kubenswrapper[7599]: E0318 13:07:10.192103 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:07:10.192190 master-0 kubenswrapper[7599]: E0318 13:07:10.192120 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.19211561 +0000 UTC m=+9.153169852 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found
Mar 18 13:07:10.192190 master-0 kubenswrapper[7599]: E0318 13:07:10.192149 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:07:10.192190 master-0 kubenswrapper[7599]: E0318 13:07:10.192167 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192162241 +0000 UTC m=+9.153216483 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192196 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192214 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192209182 +0000 UTC m=+9.153263424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192248 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192263 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192258923 +0000 UTC m=+9.153313165 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192295 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 13:07:10.192337 master-0 kubenswrapper[7599]: E0318 13:07:10.192310 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192305644 +0000 UTC m=+9.153359886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found
Mar 18 13:07:10.192531 master-0 kubenswrapper[7599]: E0318 13:07:10.192349 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 13:07:10.192531 master-0 kubenswrapper[7599]: E0318 13:07:10.192366 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.192361336 +0000 UTC m=+9.153415578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found
Mar 18 13:07:10.292077 master-0 kubenswrapper[7599]: I0318 13:07:10.292022 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.292077 master-0 kubenswrapper[7599]: I0318 13:07:10.292065 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.292314 master-0 kubenswrapper[7599]: I0318 13:07:10.292098 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.292314 master-0 kubenswrapper[7599]: I0318 13:07:10.292268 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.292387 master-0 kubenswrapper[7599]: I0318 13:07:10.292317 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: I0318 13:07:10.292605 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292647 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: I0318 13:07:10.292663 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292684 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292732 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.792714766 +0000 UTC m=+5.753769008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : configmap "client-ca" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292777 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292812 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:11.292800998 +0000 UTC m=+6.253855240 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : secret "serving-cert" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: I0318 13:07:10.292775 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292950 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:11.292936772 +0000 UTC m=+6.253991074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : configmap "client-ca" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.292995 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: E0318 13:07:10.293036 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:10.793026304 +0000 UTC m=+5.754080546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : secret "serving-cert" not found
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: I0318 13:07:10.293433 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.293634 master-0 kubenswrapper[7599]: I0318 13:07:10.293594 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.294138 master-0 kubenswrapper[7599]: I0318 13:07:10.293740 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.308032 master-0 kubenswrapper[7599]: I0318 13:07:10.307981 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:10.529912 master-0 kubenswrapper[7599]: I0318 13:07:10.529777 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-2zvf2"]
Mar 18 13:07:10.530306 master-0 kubenswrapper[7599]: I0318 13:07:10.530289 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:07:10.530912 master-0 kubenswrapper[7599]: I0318 13:07:10.530857 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-2zvf2"]
Mar 18 13:07:10.532378 master-0 kubenswrapper[7599]: I0318 13:07:10.532262 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 18 13:07:10.533526 master-0 kubenswrapper[7599]: I0318 13:07:10.533181 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 13:07:10.533526 master-0 kubenswrapper[7599]: I0318 13:07:10.533191 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 13:07:10.533526 master-0 kubenswrapper[7599]: I0318 13:07:10.533376 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 18 13:07:10.591515 master-0 kubenswrapper[7599]: I0318 13:07:10.591478 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f"}
Mar 18 13:07:10.592824 master-0 kubenswrapper[7599]: I0318 13:07:10.592799 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerID="2be57f1bc2d84ad4ff4dd4172fd46f3cfddc882962f936029d991fec6bacfeb8" exitCode=0
Mar 18 13:07:10.592931 master-0 kubenswrapper[7599]: I0318 13:07:10.592846 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerDied","Data":"2be57f1bc2d84ad4ff4dd4172fd46f3cfddc882962f936029d991fec6bacfeb8"}
Mar 18 13:07:10.595284 master-0 kubenswrapper[7599]: I0318 13:07:10.595242 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c"}
Mar 18 13:07:10.597112 master-0 kubenswrapper[7599]: I0318 13:07:10.597055 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm77k\" (UniqueName: \"kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:07:10.597183 master-0 kubenswrapper[7599]: I0318 13:07:10.597120 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:07:10.597183 master-0 kubenswrapper[7599]: I0318 13:07:10.597152 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:07:10.597608 master-0 kubenswrapper[7599]: I0318 13:07:10.597264 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.597866 master-0 kubenswrapper[7599]: I0318 13:07:10.597278 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jkl4x" event={"ID":"053cc9bc-f98e-46f6-93bb-b5344d20bf74","Type":"ContainerStarted","Data":"7c62277fe5706e0717cad60492fc0cd55a642ceb15d309441041259d65ca5acd"}
Mar 18 13:07:10.603034 master-0 kubenswrapper[7599]: I0318 13:07:10.603011 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h"
Mar 18 13:07:10.697797 master-0 kubenswrapper[7599]: I0318 13:07:10.697752 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rntv\" (UniqueName: \"kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv\") pod \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") "
Mar 18 13:07:10.697994 master-0 kubenswrapper[7599]: I0318 13:07:10.697822 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") pod \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") "
Mar 18 13:07:10.697994 master-0 kubenswrapper[7599]: I0318 13:07:10.697863 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") pod \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") "
Mar 18 13:07:10.698793 master-0 kubenswrapper[7599]: I0318 13:07:10.698378 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm77k\" (UniqueName: \"kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:07:10.698793 master-0 kubenswrapper[7599]: I0318 13:07:10.698584 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config" (OuterVolumeSpecName: "config") pod "0d56529f-6fe0-4652-ac0a-f229aef1b86f" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:10.698793 master-0 kubenswrapper[7599]: I0318 13:07:10.698689 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0d56529f-6fe0-4652-ac0a-f229aef1b86f" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f"). InnerVolumeSpecName "proxy-ca-bundles".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:10.699904 master-0 kubenswrapper[7599]: I0318 13:07:10.698769 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:10.699904 master-0 kubenswrapper[7599]: I0318 13:07:10.698971 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:10.699904 master-0 kubenswrapper[7599]: I0318 13:07:10.699042 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:10.699904 master-0 kubenswrapper[7599]: I0318 13:07:10.699055 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:10.700060 master-0 kubenswrapper[7599]: I0318 13:07:10.699992 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:10.703085 master-0 kubenswrapper[7599]: I0318 13:07:10.703060 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:10.704246 master-0 kubenswrapper[7599]: I0318 13:07:10.704205 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv" (OuterVolumeSpecName: "kube-api-access-2rntv") pod "0d56529f-6fe0-4652-ac0a-f229aef1b86f" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f"). InnerVolumeSpecName "kube-api-access-2rntv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:10.722501 master-0 kubenswrapper[7599]: I0318 13:07:10.722442 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm77k\" (UniqueName: \"kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:10.800086 master-0 kubenswrapper[7599]: I0318 13:07:10.799962 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:10.800086 master-0 kubenswrapper[7599]: I0318 13:07:10.800073 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " 
pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:10.800294 master-0 kubenswrapper[7599]: I0318 13:07:10.800142 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rntv\" (UniqueName: \"kubernetes.io/projected/0d56529f-6fe0-4652-ac0a-f229aef1b86f-kube-api-access-2rntv\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:10.800294 master-0 kubenswrapper[7599]: E0318 13:07:10.800214 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:10.800294 master-0 kubenswrapper[7599]: E0318 13:07:10.800261 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:11.800246557 +0000 UTC m=+6.761300799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : configmap "client-ca" not found Mar 18 13:07:10.800614 master-0 kubenswrapper[7599]: E0318 13:07:10.800524 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:10.800614 master-0 kubenswrapper[7599]: E0318 13:07:10.800598 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:11.800580865 +0000 UTC m=+6.761635097 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : secret "serving-cert" not found Mar 18 13:07:10.855165 master-0 kubenswrapper[7599]: I0318 13:07:10.854658 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:07:11.084045 master-0 kubenswrapper[7599]: I0318 13:07:11.083903 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:11.122704 master-0 kubenswrapper[7599]: I0318 13:07:11.122315 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:11.305459 master-0 kubenswrapper[7599]: I0318 13:07:11.305350 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:11.305648 master-0 kubenswrapper[7599]: E0318 13:07:11.305492 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:11.305648 master-0 kubenswrapper[7599]: E0318 13:07:11.305548 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.305531864 +0000 UTC m=+8.266586106 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : configmap "client-ca" not found Mar 18 13:07:11.305648 master-0 kubenswrapper[7599]: I0318 13:07:11.305619 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert\") pod \"controller-manager-f5df8899c-4vt5h\" (UID: \"0d56529f-6fe0-4652-ac0a-f229aef1b86f\") " pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:11.305938 master-0 kubenswrapper[7599]: E0318 13:07:11.305907 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:11.305986 master-0 kubenswrapper[7599]: E0318 13:07:11.305965 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert podName:0d56529f-6fe0-4652-ac0a-f229aef1b86f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.305951574 +0000 UTC m=+8.267005816 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert") pod "controller-manager-f5df8899c-4vt5h" (UID: "0d56529f-6fe0-4652-ac0a-f229aef1b86f") : secret "serving-cert" not found Mar 18 13:07:11.603146 master-0 kubenswrapper[7599]: I0318 13:07:11.603100 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-4vt5h" Mar 18 13:07:11.603146 master-0 kubenswrapper[7599]: I0318 13:07:11.603111 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:07:11.603146 master-0 kubenswrapper[7599]: I0318 13:07:11.603157 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:07:11.642695 master-0 kubenswrapper[7599]: I0318 13:07:11.642559 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"] Mar 18 13:07:11.643815 master-0 kubenswrapper[7599]: I0318 13:07:11.643775 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.645368 master-0 kubenswrapper[7599]: I0318 13:07:11.645324 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-4vt5h"] Mar 18 13:07:11.647214 master-0 kubenswrapper[7599]: I0318 13:07:11.647145 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:07:11.650319 master-0 kubenswrapper[7599]: I0318 13:07:11.650272 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:07:11.650452 master-0 kubenswrapper[7599]: I0318 13:07:11.650395 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:07:11.651745 master-0 kubenswrapper[7599]: I0318 13:07:11.651673 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:07:11.652011 master-0 kubenswrapper[7599]: I0318 13:07:11.651980 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 
13:07:11.652498 master-0 kubenswrapper[7599]: I0318 13:07:11.652476 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-4vt5h"] Mar 18 13:07:11.665538 master-0 kubenswrapper[7599]: I0318 13:07:11.665494 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"] Mar 18 13:07:11.668431 master-0 kubenswrapper[7599]: I0318 13:07:11.668387 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:07:11.710806 master-0 kubenswrapper[7599]: I0318 13:07:11.710764 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.711191 master-0 kubenswrapper[7599]: I0318 13:07:11.711169 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.711447 master-0 kubenswrapper[7599]: I0318 13:07:11.711406 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.712630 master-0 kubenswrapper[7599]: I0318 13:07:11.712585 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.712881 master-0 kubenswrapper[7599]: I0318 13:07:11.712862 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6jws\" (UniqueName: \"kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.713056 master-0 kubenswrapper[7599]: I0318 13:07:11.713038 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0d56529f-6fe0-4652-ac0a-f229aef1b86f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:11.713139 master-0 kubenswrapper[7599]: I0318 13:07:11.713127 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d56529f-6fe0-4652-ac0a-f229aef1b86f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:11.718906 master-0 kubenswrapper[7599]: I0318 13:07:11.716799 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"] Mar 18 13:07:11.718906 master-0 kubenswrapper[7599]: E0318 13:07:11.717073 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-c6jws proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" 
podUID="d8c1e3d3-7045-433d-80f6-282300d67328" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817714 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817824 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817848 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817887 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817910 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6jws\" (UniqueName: 
\"kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.817975 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.818033 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818126 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818184 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:12.318165206 +0000 UTC m=+7.279219448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : configmap "client-ca" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818475 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818506 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.818496414 +0000 UTC m=+8.779550656 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : secret "serving-cert" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818538 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.818562 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.818554415 +0000 UTC m=+8.779608657 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : configmap "client-ca" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.819506 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: I0318 13:07:11.820866 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.821123 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:11.821184 master-0 kubenswrapper[7599]: E0318 13:07:11.821155 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:12.321145797 +0000 UTC m=+7.282200039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : secret "serving-cert" not found Mar 18 13:07:11.847251 master-0 kubenswrapper[7599]: I0318 13:07:11.847211 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6jws\" (UniqueName: \"kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:12.255209 master-0 kubenswrapper[7599]: I0318 13:07:12.255119 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:12.260304 master-0 kubenswrapper[7599]: I0318 13:07:12.260254 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:07:12.324215 master-0 kubenswrapper[7599]: I0318 13:07:12.323626 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 13:07:12.324215 master-0 kubenswrapper[7599]: I0318 13:07:12.323693 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns" Mar 18 
13:07:12.324215 master-0 kubenswrapper[7599]: E0318 13:07:12.323840 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 13:07:12.324215 master-0 kubenswrapper[7599]: E0318 13:07:12.323896 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.323881854 +0000 UTC m=+8.284936096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : configmap "client-ca" not found Mar 18 13:07:12.325261 master-0 kubenswrapper[7599]: E0318 13:07:12.325165 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:12.325261 master-0 kubenswrapper[7599]: E0318 13:07:12.325222 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:13.325210825 +0000 UTC m=+8.286265067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : secret "serving-cert" not found Mar 18 13:07:12.616285 master-0 kubenswrapper[7599]: I0318 13:07:12.615038 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"
Mar 18 13:07:12.643580 master-0 kubenswrapper[7599]: I0318 13:07:12.635723 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"
Mar 18 13:07:12.667494 master-0 kubenswrapper[7599]: I0318 13:07:12.665993 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-2zvf2"]
Mar 18 13:07:12.732681 master-0 kubenswrapper[7599]: I0318 13:07:12.732620 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6jws\" (UniqueName: \"kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws\") pod \"d8c1e3d3-7045-433d-80f6-282300d67328\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") "
Mar 18 13:07:12.732945 master-0 kubenswrapper[7599]: I0318 13:07:12.732701 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config\") pod \"d8c1e3d3-7045-433d-80f6-282300d67328\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") "
Mar 18 13:07:12.732945 master-0 kubenswrapper[7599]: I0318 13:07:12.732759 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles\") pod \"d8c1e3d3-7045-433d-80f6-282300d67328\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") "
Mar 18 13:07:12.733618 master-0 kubenswrapper[7599]: I0318 13:07:12.733460 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d8c1e3d3-7045-433d-80f6-282300d67328" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:12.733618 master-0 kubenswrapper[7599]: I0318 13:07:12.733585 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config" (OuterVolumeSpecName: "config") pod "d8c1e3d3-7045-433d-80f6-282300d67328" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:12.738541 master-0 kubenswrapper[7599]: I0318 13:07:12.738477 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws" (OuterVolumeSpecName: "kube-api-access-c6jws") pod "d8c1e3d3-7045-433d-80f6-282300d67328" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328"). InnerVolumeSpecName "kube-api-access-c6jws". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:07:12.760076 master-0 kubenswrapper[7599]: W0318 13:07:12.759904 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb89fb313_d01a_4305_b123_e253b3382b85.slice/crio-1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1 WatchSource:0}: Error finding container 1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1: Status 404 returned error can't find the container with id 1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1
Mar 18 13:07:12.834001 master-0 kubenswrapper[7599]: I0318 13:07:12.833960 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:12.834001 master-0 kubenswrapper[7599]: I0318 13:07:12.833996 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6jws\" (UniqueName: \"kubernetes.io/projected/d8c1e3d3-7045-433d-80f6-282300d67328-kube-api-access-c6jws\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:12.834134 master-0 kubenswrapper[7599]: I0318 13:07:12.834010 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:13.324134 master-0 kubenswrapper[7599]: I0318 13:07:13.323778 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:07:13.327954 master-0 kubenswrapper[7599]: I0318 13:07:13.327927 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:07:13.344812 master-0 kubenswrapper[7599]: I0318 13:07:13.344769 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"
Mar 18 13:07:13.345031 master-0 kubenswrapper[7599]: I0318 13:07:13.344839 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca\") pod \"controller-manager-59dbf97cf8-p7zns\" (UID: \"d8c1e3d3-7045-433d-80f6-282300d67328\") " pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"
Mar 18 13:07:13.345071 master-0 kubenswrapper[7599]: E0318 13:07:13.345032 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 13:07:13.345103 master-0 kubenswrapper[7599]: E0318 13:07:13.345072 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:15.345059652 +0000 UTC m=+10.306113894 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : configmap "client-ca" not found
Mar 18 13:07:13.345392 master-0 kubenswrapper[7599]: E0318 13:07:13.345368 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:13.345392 master-0 kubenswrapper[7599]: E0318 13:07:13.345427 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert podName:d8c1e3d3-7045-433d-80f6-282300d67328 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:15.34539136 +0000 UTC m=+10.306445602 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert") pod "controller-manager-59dbf97cf8-p7zns" (UID: "d8c1e3d3-7045-433d-80f6-282300d67328") : secret "serving-cert" not found
Mar 18 13:07:13.375806 master-0 kubenswrapper[7599]: I0318 13:07:13.375758 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d56529f-6fe0-4652-ac0a-f229aef1b86f" path="/var/lib/kubelet/pods/0d56529f-6fe0-4652-ac0a-f229aef1b86f/volumes"
Mar 18 13:07:13.620209 master-0 kubenswrapper[7599]: I0318 13:07:13.620158 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"ff6612978fe2b0582a45870266115fa659f6abe171419afdc4fcd20dc786a7cb"}
Mar 18 13:07:13.620748 master-0 kubenswrapper[7599]: I0318 13:07:13.620211 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"33f04bfdd9c035c5cd30a8348194efaf7b8c0c01d29ad4ecd3e45f3c84d558aa"}
Mar 18 13:07:13.628567 master-0 kubenswrapper[7599]: I0318 13:07:13.625842 7599 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="a4c090aab4f3bf89ced608a71e5db3af3d21ed7b2100020f019a5440d122cecc" exitCode=0
Mar 18 13:07:13.628567 master-0 kubenswrapper[7599]: I0318 13:07:13.625939 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"a4c090aab4f3bf89ced608a71e5db3af3d21ed7b2100020f019a5440d122cecc"}
Mar 18 13:07:13.631574 master-0 kubenswrapper[7599]: I0318 13:07:13.631521 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" event={"ID":"b89fb313-d01a-4305-b123-e253b3382b85","Type":"ContainerStarted","Data":"9a89fb2a5bf4388a7514a371a51f6ac933c33ac9c54d8113cf8c422503facd37"}
Mar 18 13:07:13.631631 master-0 kubenswrapper[7599]: I0318 13:07:13.631589 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" event={"ID":"b89fb313-d01a-4305-b123-e253b3382b85","Type":"ContainerStarted","Data":"1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1"}
Mar 18 13:07:13.639651 master-0 kubenswrapper[7599]: I0318 13:07:13.638196 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" podStartSLOduration=3.128706497 podStartE2EDuration="5.638172379s" podCreationTimestamp="2026-03-18 13:07:08 +0000 UTC" firstStartedPulling="2026-03-18 13:07:09.983266874 +0000 UTC m=+4.944321116" lastFinishedPulling="2026-03-18 13:07:12.492732746 +0000 UTC m=+7.453786998" observedRunningTime="2026-03-18 13:07:13.638065716 +0000 UTC m=+8.599119958" watchObservedRunningTime="2026-03-18 13:07:13.638172379 +0000 UTC m=+8.599226641"
Mar 18 13:07:13.647931 master-0 kubenswrapper[7599]: I0318 13:07:13.647868 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"
Mar 18 13:07:13.648729 master-0 kubenswrapper[7599]: I0318 13:07:13.648617 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"8c9e4d7f5a1cfb905af9530af8305e93c12f5088f9374b32f042b05f77b48591"}
Mar 18 13:07:13.655760 master-0 kubenswrapper[7599]: I0318 13:07:13.655662 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:07:13.684676 master-0 kubenswrapper[7599]: I0318 13:07:13.684588 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" podStartSLOduration=3.684565388 podStartE2EDuration="3.684565388s" podCreationTimestamp="2026-03-18 13:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:13.678454145 +0000 UTC m=+8.639508387" watchObservedRunningTime="2026-03-18 13:07:13.684565388 +0000 UTC m=+8.645619630"
Mar 18 13:07:13.725837 master-0 kubenswrapper[7599]: I0318 13:07:13.725737 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podStartSLOduration=2.873635263 podStartE2EDuration="5.725707486s" podCreationTimestamp="2026-03-18 13:07:08 +0000 UTC" firstStartedPulling="2026-03-18 13:07:09.688046402 +0000 UTC m=+4.649100644" lastFinishedPulling="2026-03-18 13:07:12.540118615 +0000 UTC m=+7.501172867" observedRunningTime="2026-03-18 13:07:13.724014684 +0000 UTC m=+8.685068946" watchObservedRunningTime="2026-03-18 13:07:13.725707486 +0000 UTC m=+8.686761738"
Mar 18 13:07:13.766524 master-0 kubenswrapper[7599]: I0318 13:07:13.766470 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"]
Mar 18 13:07:13.771262 master-0 kubenswrapper[7599]: I0318 13:07:13.771203 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"]
Mar 18 13:07:13.773034 master-0 kubenswrapper[7599]: I0318 13:07:13.772456 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.777624 master-0 kubenswrapper[7599]: I0318 13:07:13.777525 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:07:13.783528 master-0 kubenswrapper[7599]: I0318 13:07:13.781610 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:07:13.783528 master-0 kubenswrapper[7599]: I0318 13:07:13.782774 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:07:13.783528 master-0 kubenswrapper[7599]: I0318 13:07:13.782890 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:07:13.783528 master-0 kubenswrapper[7599]: I0318 13:07:13.782997 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:07:13.785004 master-0 kubenswrapper[7599]: I0318 13:07:13.784898 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59dbf97cf8-p7zns"]
Mar 18 13:07:13.794201 master-0 kubenswrapper[7599]: I0318 13:07:13.794082 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"]
Mar 18 13:07:13.795565 master-0 kubenswrapper[7599]: I0318 13:07:13.795472 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:07:13.849545 master-0 kubenswrapper[7599]: I0318 13:07:13.847540 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"]
Mar 18 13:07:13.849545 master-0 kubenswrapper[7599]: E0318 13:07:13.848436 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-fftww proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" podUID="bc27fbd9-f46a-487e-ba6b-edcfd049648b"
Mar 18 13:07:13.858090 master-0 kubenswrapper[7599]: I0318 13:07:13.858029 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.858236 master-0 kubenswrapper[7599]: I0318 13:07:13.858101 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fftww\" (UniqueName: \"kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.858236 master-0 kubenswrapper[7599]: I0318 13:07:13.858192 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:13.858304 master-0 kubenswrapper[7599]: I0318 13:07:13.858236 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.858304 master-0 kubenswrapper[7599]: I0318 13:07:13.858291 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.858823 master-0 kubenswrapper[7599]: I0318 13:07:13.858323 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:13.858823 master-0 kubenswrapper[7599]: I0318 13:07:13.858345 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.858823 master-0 kubenswrapper[7599]: E0318 13:07:13.858713 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:13.858823 master-0 kubenswrapper[7599]: E0318 13:07:13.858758 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert podName:88d9833d-372c-43a9-b6bb-d1753177443e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:17.858743971 +0000 UTC m=+12.819798213 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert") pod "route-controller-manager-688d854df6-pbq74" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e") : secret "serving-cert" not found
Mar 18 13:07:13.863908 master-0 kubenswrapper[7599]: I0318 13:07:13.863868 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"route-controller-manager-688d854df6-pbq74\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"
Mar 18 13:07:13.879441 master-0 kubenswrapper[7599]: I0318 13:07:13.874804 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"]
Mar 18 13:07:13.879441 master-0 kubenswrapper[7599]: E0318 13:07:13.875168 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" podUID="88d9833d-372c-43a9-b6bb-d1753177443e"
Mar 18 13:07:13.906644 master-0 kubenswrapper[7599]: I0318 13:07:13.906567 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:13.906860 master-0 kubenswrapper[7599]: I0318 13:07:13.906724 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:07:13.906860 master-0 kubenswrapper[7599]: I0318 13:07:13.906736 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 13:07:13.945503 master-0 kubenswrapper[7599]: I0318 13:07:13.945463 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962494 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962542 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962599 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962636 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fftww\" (UniqueName: \"kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962710 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962752 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8c1e3d3-7045-433d-80f6-282300d67328-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:13.963356 master-0 kubenswrapper[7599]: I0318 13:07:13.962764 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8c1e3d3-7045-433d-80f6-282300d67328-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:13.964501 master-0 kubenswrapper[7599]: I0318 13:07:13.963562 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.964702 master-0 kubenswrapper[7599]: I0318 13:07:13.964682 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.965281 master-0 kubenswrapper[7599]: I0318 13:07:13.965259 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:13.965355 master-0 kubenswrapper[7599]: E0318 13:07:13.965333 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:13.965387 master-0 kubenswrapper[7599]: E0318 13:07:13.965369 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert podName:bc27fbd9-f46a-487e-ba6b-edcfd049648b nodeName:}" failed. No retries permitted until 2026-03-18 13:07:14.465358317 +0000 UTC m=+9.426412559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert") pod "controller-manager-7fdd9495d7-jnhck" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b") : secret "serving-cert" not found
Mar 18 13:07:14.001436 master-0 kubenswrapper[7599]: I0318 13:07:14.001358 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fftww\" (UniqueName: \"kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"
Mar 18 13:07:14.172363 master-0 kubenswrapper[7599]: I0318 13:07:14.172312 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:07:14.172673 master-0 kubenswrapper[7599]: I0318 13:07:14.172397 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:07:14.172673 master-0 kubenswrapper[7599]: I0318 13:07:14.172448 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:07:14.172818 master-0 kubenswrapper[7599]: E0318 13:07:14.172705 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:14.172818 master-0 kubenswrapper[7599]: E0318 13:07:14.172814 7599 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:14.172906 master-0 kubenswrapper[7599]: E0318 13:07:14.172869 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.172832883 +0000 UTC m=+17.133887305 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 13:07:14.172906 master-0 kubenswrapper[7599]: E0318 13:07:14.172900 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls podName:ac6d8eb6-1d5e-4757-9823-5ffe478c711c nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.172888074 +0000 UTC m=+17.133942616 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-mz4qp" (UID: "ac6d8eb6-1d5e-4757-9823-5ffe478c711c") : secret "cluster-baremetal-operator-tls" not found
Mar 18 13:07:14.172906 master-0 kubenswrapper[7599]: E0318 13:07:14.172907 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 13:07:14.173004 master-0 kubenswrapper[7599]: E0318 13:07:14.172938 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls podName:59bf5114-29f9-4f70-8582-108e95327cb2 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.172921935 +0000 UTC m=+17.133976177 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls") pod "dns-operator-9c5679d8f-5lzzn" (UID: "59bf5114-29f9-4f70-8582-108e95327cb2") : secret "metrics-tls" not found
Mar 18 13:07:14.275850 master-0 kubenswrapper[7599]: I0318 13:07:14.275789 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:14.275850 master-0 kubenswrapper[7599]: I0318 13:07:14.275849 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.275874 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.275893 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.275922 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.275944 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.276008 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.276053 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.276080 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:14.276104 master-0 kubenswrapper[7599]: I0318 13:07:14.276107 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:14.276349 master-0 kubenswrapper[7599]: I0318 13:07:14.276123 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:07:14.276349 master-0 kubenswrapper[7599]: I0318 13:07:14.276139 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:14.276349 master-0 kubenswrapper[7599]: I0318 13:07:14.276157 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:14.276349 master-0 kubenswrapper[7599]: E0318 13:07:14.276268 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 13:07:14.276349 master-0 kubenswrapper[7599]: E0318 13:07:14.276319 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276302839 +0000 UTC m=+17.237357081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found
Mar 18 13:07:14.276655 master-0 kubenswrapper[7599]: E0318 13:07:14.276604 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 13:07:14.276688 master-0 kubenswrapper[7599]: E0318 13:07:14.276646 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:14.276716 master-0 kubenswrapper[7599]: E0318 13:07:14.276703 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls podName:290d1f84-5c5c-4bff-b045-e6020793cded nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276682278 +0000 UTC m=+17.237736560 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-6l7pv" (UID: "290d1f84-5c5c-4bff-b045-e6020793cded") : secret "image-registry-operator-tls" not found
Mar 18 13:07:14.276751 master-0 kubenswrapper[7599]: E0318 13:07:14.276725 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276716689 +0000 UTC m=+17.237771011 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found
Mar 18 13:07:14.276794 master-0 kubenswrapper[7599]: E0318 13:07:14.276751 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 13:07:14.276794 master-0 kubenswrapper[7599]: E0318 13:07:14.276776 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 13:07:14.276874 master-0 kubenswrapper[7599]: E0318 13:07:14.276817 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.27677013 +0000 UTC m=+17.237824442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found
Mar 18 13:07:14.276874 master-0 kubenswrapper[7599]: E0318 13:07:14.276834 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 13:07:14.276874 master-0 kubenswrapper[7599]: E0318 13:07:14.276837 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert podName:2b06a568-4dad-44b4-8312-aa52911dbfb0 nodeName:}" failed.
No retries permitted until 2026-03-18 13:07:22.276828222 +0000 UTC m=+17.237882564 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert") pod "cluster-version-operator-56d8475767-f2hjp" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0") : secret "cluster-version-operator-serving-cert" not found Mar 18 13:07:14.276874 master-0 kubenswrapper[7599]: E0318 13:07:14.276857 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276849052 +0000 UTC m=+17.237903404 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276888 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276899 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276915 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276907094 +0000 UTC m=+17.237961416 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276932 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276924604 +0000 UTC m=+17.237978946 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276976 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:14.277027 master-0 kubenswrapper[7599]: E0318 13:07:14.276995 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.276989866 +0000 UTC m=+17.238044108 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:14.277216 master-0 kubenswrapper[7599]: E0318 13:07:14.277036 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:07:14.277216 master-0 kubenswrapper[7599]: E0318 13:07:14.277052 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.277047337 +0000 UTC m=+17.238101579 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:07:14.277216 master-0 kubenswrapper[7599]: E0318 13:07:14.277141 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 13:07:14.277216 master-0 kubenswrapper[7599]: E0318 13:07:14.277183 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls podName:d9d09a56-ed4c-40b7-8be1-f3934c07296e nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.27717432 +0000 UTC m=+17.238228552 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls") pod "ingress-operator-66b84d69b-wqxpk" (UID: "d9d09a56-ed4c-40b7-8be1-f3934c07296e") : secret "metrics-tls" not found Mar 18 13:07:14.281432 master-0 kubenswrapper[7599]: I0318 13:07:14.280838 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:14.281432 master-0 kubenswrapper[7599]: I0318 13:07:14.280929 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:14.478714 master-0 kubenswrapper[7599]: I0318 13:07:14.478260 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" Mar 18 13:07:14.478862 master-0 kubenswrapper[7599]: E0318 13:07:14.478461 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:14.478862 master-0 kubenswrapper[7599]: E0318 13:07:14.478814 7599 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert podName:bc27fbd9-f46a-487e-ba6b-edcfd049648b nodeName:}" failed. No retries permitted until 2026-03-18 13:07:15.47879223 +0000 UTC m=+10.439846572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert") pod "controller-manager-7fdd9495d7-jnhck" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b") : secret "serving-cert" not found Mar 18 13:07:14.515796 master-0 kubenswrapper[7599]: I0318 13:07:14.515737 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:07:14.672999 master-0 kubenswrapper[7599]: I0318 13:07:14.672936 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"9fff8d077ff23531e072493f53ed61382efbb48a6316de383a5ae419bc8c0c9f"} Mar 18 13:07:14.673795 master-0 kubenswrapper[7599]: I0318 13:07:14.673032 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:07:14.673795 master-0 kubenswrapper[7599]: I0318 13:07:14.673070 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:14.673795 master-0 kubenswrapper[7599]: I0318 13:07:14.673135 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" Mar 18 13:07:14.694746 master-0 kubenswrapper[7599]: I0318 13:07:14.694697 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" Mar 18 13:07:14.696400 master-0 kubenswrapper[7599]: I0318 13:07:14.696368 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:14.743280 master-0 kubenswrapper[7599]: I0318 13:07:14.743153 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"] Mar 18 13:07:14.746836 master-0 kubenswrapper[7599]: W0318 13:07:14.746798 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ff09ab_cbe1_49e7_8121_5f71997a5176.slice/crio-35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2 WatchSource:0}: Error finding container 35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2: Status 404 returned error can't find the container with id 35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2 Mar 18 13:07:14.791528 master-0 kubenswrapper[7599]: I0318 13:07:14.791465 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles\") pod \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791542 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z\") pod \"88d9833d-372c-43a9-b6bb-d1753177443e\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791595 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") pod \"88d9833d-372c-43a9-b6bb-d1753177443e\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791616 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config\") pod \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791649 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config\") pod \"88d9833d-372c-43a9-b6bb-d1753177443e\" (UID: \"88d9833d-372c-43a9-b6bb-d1753177443e\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791669 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca\") pod \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " Mar 18 13:07:14.791820 master-0 kubenswrapper[7599]: I0318 13:07:14.791696 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fftww\" (UniqueName: \"kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww\") pod \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " Mar 18 13:07:14.793702 master-0 kubenswrapper[7599]: I0318 13:07:14.793662 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc27fbd9-f46a-487e-ba6b-edcfd049648b" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:14.794047 master-0 kubenswrapper[7599]: I0318 13:07:14.794001 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca" (OuterVolumeSpecName: "client-ca") pod "88d9833d-372c-43a9-b6bb-d1753177443e" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:14.797756 master-0 kubenswrapper[7599]: I0318 13:07:14.797706 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config" (OuterVolumeSpecName: "config") pod "bc27fbd9-f46a-487e-ba6b-edcfd049648b" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:14.798180 master-0 kubenswrapper[7599]: I0318 13:07:14.798121 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z" (OuterVolumeSpecName: "kube-api-access-8bf4z") pod "88d9833d-372c-43a9-b6bb-d1753177443e" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e"). InnerVolumeSpecName "kube-api-access-8bf4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:14.799086 master-0 kubenswrapper[7599]: I0318 13:07:14.798962 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww" (OuterVolumeSpecName: "kube-api-access-fftww") pod "bc27fbd9-f46a-487e-ba6b-edcfd049648b" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b"). InnerVolumeSpecName "kube-api-access-fftww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:14.799086 master-0 kubenswrapper[7599]: I0318 13:07:14.799045 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config" (OuterVolumeSpecName: "config") pod "88d9833d-372c-43a9-b6bb-d1753177443e" (UID: "88d9833d-372c-43a9-b6bb-d1753177443e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:14.799259 master-0 kubenswrapper[7599]: I0318 13:07:14.799236 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bc27fbd9-f46a-487e-ba6b-edcfd049648b" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:14.894006 master-0 kubenswrapper[7599]: I0318 13:07:14.893950 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.894006 master-0 kubenswrapper[7599]: I0318 13:07:14.893999 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.894006 master-0 kubenswrapper[7599]: I0318 13:07:14.894017 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fftww\" (UniqueName: \"kubernetes.io/projected/bc27fbd9-f46a-487e-ba6b-edcfd049648b-kube-api-access-fftww\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.895229 master-0 kubenswrapper[7599]: I0318 13:07:14.894029 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.895229 master-0 kubenswrapper[7599]: I0318 13:07:14.894043 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bf4z\" (UniqueName: \"kubernetes.io/projected/88d9833d-372c-43a9-b6bb-d1753177443e-kube-api-access-8bf4z\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.895229 master-0 kubenswrapper[7599]: I0318 13:07:14.894057 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88d9833d-372c-43a9-b6bb-d1753177443e-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:14.895229 master-0 kubenswrapper[7599]: I0318 13:07:14.894066 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc27fbd9-f46a-487e-ba6b-edcfd049648b-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:15.167427 master-0 kubenswrapper[7599]: I0318 13:07:15.167374 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:15.184353 master-0 kubenswrapper[7599]: I0318 13:07:15.184297 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:07:15.378109 master-0 kubenswrapper[7599]: I0318 13:07:15.378057 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c1e3d3-7045-433d-80f6-282300d67328" path="/var/lib/kubelet/pods/d8c1e3d3-7045-433d-80f6-282300d67328/volumes" Mar 18 13:07:15.499902 master-0 kubenswrapper[7599]: I0318 13:07:15.499610 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert\") pod \"controller-manager-7fdd9495d7-jnhck\" (UID: \"bc27fbd9-f46a-487e-ba6b-edcfd049648b\") " 
pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" Mar 18 13:07:15.499902 master-0 kubenswrapper[7599]: E0318 13:07:15.499882 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 13:07:15.500099 master-0 kubenswrapper[7599]: E0318 13:07:15.499959 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert podName:bc27fbd9-f46a-487e-ba6b-edcfd049648b nodeName:}" failed. No retries permitted until 2026-03-18 13:07:17.499938803 +0000 UTC m=+12.460993045 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert") pod "controller-manager-7fdd9495d7-jnhck" (UID: "bc27fbd9-f46a-487e-ba6b-edcfd049648b") : secret "serving-cert" not found Mar 18 13:07:15.751955 master-0 kubenswrapper[7599]: I0318 13:07:15.747621 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerStarted","Data":"35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2"} Mar 18 13:07:15.751955 master-0 kubenswrapper[7599]: I0318 13:07:15.747695 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fdd9495d7-jnhck" Mar 18 13:07:15.751955 master-0 kubenswrapper[7599]: I0318 13:07:15.747991 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74" Mar 18 13:07:15.791248 master-0 kubenswrapper[7599]: I0318 13:07:15.791191 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:15.796903 master-0 kubenswrapper[7599]: I0318 13:07:15.791766 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:15.796903 master-0 kubenswrapper[7599]: I0318 13:07:15.793099 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"] Mar 18 13:07:15.796903 master-0 kubenswrapper[7599]: I0318 13:07:15.796793 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:15.798773 master-0 kubenswrapper[7599]: I0318 13:07:15.797481 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:07:15.798856 master-0 kubenswrapper[7599]: I0318 13:07:15.798791 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:07:15.798891 master-0 kubenswrapper[7599]: I0318 13:07:15.798857 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-688d854df6-pbq74"] Mar 18 13:07:15.798891 master-0 kubenswrapper[7599]: I0318 13:07:15.798791 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:07:15.798947 master-0 kubenswrapper[7599]: I0318 13:07:15.798892 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:07:15.799509 
master-0 kubenswrapper[7599]: I0318 13:07:15.799314 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:07:15.842876 master-0 kubenswrapper[7599]: I0318 13:07:15.842825 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"] Mar 18 13:07:15.844454 master-0 kubenswrapper[7599]: I0318 13:07:15.844424 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fdd9495d7-jnhck"] Mar 18 13:07:15.924083 master-0 kubenswrapper[7599]: I0318 13:07:15.918490 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:15.924083 master-0 kubenswrapper[7599]: I0318 13:07:15.918688 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:15.924083 master-0 kubenswrapper[7599]: I0318 13:07:15.918816 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:15.924083 master-0 
kubenswrapper[7599]: I0318 13:07:15.918903 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4pc\" (UniqueName: \"kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:15.924083 master-0 kubenswrapper[7599]: I0318 13:07:15.919060 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88d9833d-372c-43a9-b6bb-d1753177443e-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:16.020502 master-0 kubenswrapper[7599]: I0318 13:07:16.020345 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.020891 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.021007 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " 
pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.021090 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4pc\" (UniqueName: \"kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: E0318 13:07:16.021369 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.021385 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc27fbd9-f46a-487e-ba6b-edcfd049648b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: E0318 13:07:16.021515 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert podName:f53a0371-0ce3-4a3f-b3e1-0043a0f3e806 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:16.52149521 +0000 UTC m=+11.482549452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert") pod "route-controller-manager-c8db4484-th8hr" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806") : secret "serving-cert" not found
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.022382 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.024961 master-0 kubenswrapper[7599]: I0318 13:07:16.022635 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.050966 master-0 kubenswrapper[7599]: I0318 13:07:16.050890 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4pc\" (UniqueName: \"kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.529434 master-0 kubenswrapper[7599]: I0318 13:07:16.529377 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:16.529692 master-0 kubenswrapper[7599]: E0318 13:07:16.529633 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:16.529937 master-0 kubenswrapper[7599]: E0318 13:07:16.529907 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert podName:f53a0371-0ce3-4a3f-b3e1-0043a0f3e806 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:17.529879967 +0000 UTC m=+12.490934349 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert") pod "route-controller-manager-c8db4484-th8hr" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806") : secret "serving-cert" not found
Mar 18 13:07:16.977588 master-0 kubenswrapper[7599]: I0318 13:07:16.975753 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:07:16.981742 master-0 kubenswrapper[7599]: I0318 13:07:16.981706 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:07:17.378176 master-0 kubenswrapper[7599]: I0318 13:07:17.377793 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d9833d-372c-43a9-b6bb-d1753177443e" path="/var/lib/kubelet/pods/88d9833d-372c-43a9-b6bb-d1753177443e/volumes"
Mar 18 13:07:17.378819 master-0 kubenswrapper[7599]: I0318 13:07:17.378786 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc27fbd9-f46a-487e-ba6b-edcfd049648b" path="/var/lib/kubelet/pods/bc27fbd9-f46a-487e-ba6b-edcfd049648b/volumes"
Mar 18 13:07:17.551889 master-0 kubenswrapper[7599]: I0318 13:07:17.551842 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:17.552050 master-0 kubenswrapper[7599]: E0318 13:07:17.552016 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:17.552138 master-0 kubenswrapper[7599]: E0318 13:07:17.552120 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert podName:f53a0371-0ce3-4a3f-b3e1-0043a0f3e806 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:19.552097268 +0000 UTC m=+14.513151510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert") pod "route-controller-manager-c8db4484-th8hr" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806") : secret "serving-cert" not found
Mar 18 13:07:17.759741 master-0 kubenswrapper[7599]: I0318 13:07:17.759317 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"cc3f7f9178a5ebacbbfda5fb5509e6451e88862baa49b921cb87ec5d6cc82ee7"}
Mar 18 13:07:17.971890 master-0 kubenswrapper[7599]: I0318 13:07:17.971827 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"]
Mar 18 13:07:17.972436 master-0 kubenswrapper[7599]: I0318 13:07:17.972383 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:17.979378 master-0 kubenswrapper[7599]: I0318 13:07:17.979352 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:07:17.980955 master-0 kubenswrapper[7599]: I0318 13:07:17.979698 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:07:17.981171 master-0 kubenswrapper[7599]: I0318 13:07:17.979977 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:07:17.981382 master-0 kubenswrapper[7599]: I0318 13:07:17.980053 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:07:17.987278 master-0 kubenswrapper[7599]: I0318 13:07:17.987228 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:07:17.991547 master-0 kubenswrapper[7599]: I0318 13:07:17.990813 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"]
Mar 18 13:07:17.991547 master-0 kubenswrapper[7599]: I0318 13:07:17.991061 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:07:18.158048 master-0 kubenswrapper[7599]: I0318 13:07:18.157792 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.158048 master-0 kubenswrapper[7599]: I0318 13:07:18.157870 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.158048 master-0 kubenswrapper[7599]: I0318 13:07:18.157986 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.158048 master-0 kubenswrapper[7599]: I0318 13:07:18.158013 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.158048 master-0 kubenswrapper[7599]: I0318 13:07:18.158046 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45rtd\" (UniqueName: \"kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.258752 master-0 kubenswrapper[7599]: I0318 13:07:18.258670 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.258752 master-0 kubenswrapper[7599]: I0318 13:07:18.258741 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.258975 master-0 kubenswrapper[7599]: I0318 13:07:18.258822 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.258975 master-0 kubenswrapper[7599]: I0318 13:07:18.258843 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.258975 master-0 kubenswrapper[7599]: I0318 13:07:18.258872 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45rtd\" (UniqueName: \"kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.259725 master-0 kubenswrapper[7599]: I0318 13:07:18.259675 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.263433 master-0 kubenswrapper[7599]: I0318 13:07:18.260149 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.263433 master-0 kubenswrapper[7599]: I0318 13:07:18.261742 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.268196 master-0 kubenswrapper[7599]: I0318 13:07:18.268153 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.280431 master-0 kubenswrapper[7599]: I0318 13:07:18.277684 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45rtd\" (UniqueName: \"kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd\") pod \"controller-manager-65c549646c-d9988\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.304654 master-0 kubenswrapper[7599]: I0318 13:07:18.304597 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:18.533533 master-0 kubenswrapper[7599]: I0318 13:07:18.533264 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"]
Mar 18 13:07:18.547216 master-0 kubenswrapper[7599]: W0318 13:07:18.547131 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b2a641f_3fc0_4efe_b72e_429bfdedd2cb.slice/crio-a5057585a8fcc6082a3c302654cc31b932a69b01c67b4b2e81c1135f36d47cf4 WatchSource:0}: Error finding container a5057585a8fcc6082a3c302654cc31b932a69b01c67b4b2e81c1135f36d47cf4: Status 404 returned error can't find the container with id a5057585a8fcc6082a3c302654cc31b932a69b01c67b4b2e81c1135f36d47cf4
Mar 18 13:07:18.766008 master-0 kubenswrapper[7599]: I0318 13:07:18.765856 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-bppd4_902909ca-ab08-49aa-9736-70e073f8e67d/cluster-olm-operator/0.log"
Mar 18 13:07:18.767498 master-0 kubenswrapper[7599]: I0318 13:07:18.767456 7599 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="cc3f7f9178a5ebacbbfda5fb5509e6451e88862baa49b921cb87ec5d6cc82ee7" exitCode=255
Mar 18 13:07:18.767563 master-0 kubenswrapper[7599]: I0318 13:07:18.767499 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"cc3f7f9178a5ebacbbfda5fb5509e6451e88862baa49b921cb87ec5d6cc82ee7"}
Mar 18 13:07:18.768030 master-0 kubenswrapper[7599]: I0318 13:07:18.767989 7599 scope.go:117] "RemoveContainer" containerID="cc3f7f9178a5ebacbbfda5fb5509e6451e88862baa49b921cb87ec5d6cc82ee7"
Mar 18 13:07:18.771165 master-0 kubenswrapper[7599]: I0318 13:07:18.768855 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" event={"ID":"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb","Type":"ContainerStarted","Data":"a5057585a8fcc6082a3c302654cc31b932a69b01c67b4b2e81c1135f36d47cf4"}
Mar 18 13:07:19.589258 master-0 kubenswrapper[7599]: I0318 13:07:19.588949 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:19.590213 master-0 kubenswrapper[7599]: E0318 13:07:19.589138 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:19.590213 master-0 kubenswrapper[7599]: E0318 13:07:19.589376 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert podName:f53a0371-0ce3-4a3f-b3e1-0043a0f3e806 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:23.589354389 +0000 UTC m=+18.550408631 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert") pod "route-controller-manager-c8db4484-th8hr" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806") : secret "serving-cert" not found
Mar 18 13:07:19.776056 master-0 kubenswrapper[7599]: I0318 13:07:19.776005 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-bppd4_902909ca-ab08-49aa-9736-70e073f8e67d/cluster-olm-operator/0.log"
Mar 18 13:07:19.776880 master-0 kubenswrapper[7599]: I0318 13:07:19.776837 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc"}
Mar 18 13:07:19.977726 master-0 kubenswrapper[7599]: I0318 13:07:19.977650 7599 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-qwgrm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 13:07:19.978360 master-0 kubenswrapper[7599]: I0318 13:07:19.977715 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.409348 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-85f4f795dd-9l6gz"]
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.410661 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.414614 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.414653 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.414702 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.414813 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 13:07:20.416247 master-0 kubenswrapper[7599]: I0318 13:07:20.414862 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 13:07:20.423439 master-0 kubenswrapper[7599]: I0318 13:07:20.419806 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 13:07:20.423439 master-0 kubenswrapper[7599]: I0318 13:07:20.419994 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 18 13:07:20.424970 master-0 kubenswrapper[7599]: I0318 13:07:20.424926 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425038 master-0 kubenswrapper[7599]: I0318 13:07:20.424984 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425038 master-0 kubenswrapper[7599]: I0318 13:07:20.425009 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425108 master-0 kubenswrapper[7599]: I0318 13:07:20.425069 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425108 master-0 kubenswrapper[7599]: I0318 13:07:20.425094 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425191 master-0 kubenswrapper[7599]: I0318 13:07:20.425127 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425191 master-0 kubenswrapper[7599]: I0318 13:07:20.425154 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425308 master-0 kubenswrapper[7599]: I0318 13:07:20.425223 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425308 master-0 kubenswrapper[7599]: I0318 13:07:20.425254 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425390 master-0 kubenswrapper[7599]: I0318 13:07:20.425320 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.425390 master-0 kubenswrapper[7599]: I0318 13:07:20.425337 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bb5g\" (UniqueName: \"kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.426380 master-0 kubenswrapper[7599]: I0318 13:07:20.426135 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 13:07:20.427603 master-0 kubenswrapper[7599]: I0318 13:07:20.427397 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-85f4f795dd-9l6gz"]
Mar 18 13:07:20.427969 master-0 kubenswrapper[7599]: I0318 13:07:20.427927 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 13:07:20.428069 master-0 kubenswrapper[7599]: I0318 13:07:20.428034 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 13:07:20.526791 master-0 kubenswrapper[7599]: I0318 13:07:20.526742 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.526791 master-0 kubenswrapper[7599]: I0318 13:07:20.526784 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bb5g\" (UniqueName: \"kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: I0318 13:07:20.526823 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: I0318 13:07:20.526888 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: I0318 13:07:20.526905 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: E0318 13:07:20.526931 7599 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: I0318 13:07:20.526940 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527045 master-0 kubenswrapper[7599]: E0318 13:07:20.527017 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:21.026985666 +0000 UTC m=+15.988039908 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : secret "etcd-client" not found
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: E0318 13:07:20.527055 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: I0318 13:07:20.527069 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: E0318 13:07:20.527104 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:21.027087808 +0000 UTC m=+15.988142050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : configmap "audit-0" not found
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: I0318 13:07:20.527126 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: I0318 13:07:20.527146 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: I0318 13:07:20.527181 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527264 master-0 kubenswrapper[7599]: I0318 13:07:20.527206 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527717 master-0 kubenswrapper[7599]: I0318 13:07:20.527702 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527773 master-0 kubenswrapper[7599]: I0318 13:07:20.527741 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527868 master-0 kubenswrapper[7599]: I0318 13:07:20.527841 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.527912 master-0 kubenswrapper[7599]: I0318 13:07:20.527892 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.528439 master-0 kubenswrapper[7599]: E0318 13:07:20.528399 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 13:07:20.528479 master-0 kubenswrapper[7599]: I0318 13:07:20.528450 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.528479 master-0 kubenswrapper[7599]: E0318 13:07:20.528462 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:21.028451882 +0000 UTC m=+15.989506124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : secret "serving-cert" not found
Mar 18 13:07:20.528577 master-0 kubenswrapper[7599]: I0318 13:07:20.528566 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.531880 master-0 kubenswrapper[7599]: I0318 13:07:20.531832 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.553263 master-0 kubenswrapper[7599]: I0318 13:07:20.553202 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bb5g\" (UniqueName: \"kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:20.993661 master-0 kubenswrapper[7599]: I0318 13:07:20.993626 7599 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-qwgrm container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 18 13:07:20.994708 master-0 kubenswrapper[7599]: I0318 13:07:20.993676 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 18 13:07:21.035708 master-0 kubenswrapper[7599]: I0318 13:07:21.035667 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:21.035857 master-0 kubenswrapper[7599]: I0318 13:07:21.035733 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:21.035857 master-0 kubenswrapper[7599]: I0318 13:07:21.035825 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:21.036203 master-0 kubenswrapper[7599]: E0318 13:07:21.036168 7599
configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 13:07:21.036272 master-0 kubenswrapper[7599]: E0318 13:07:21.036247 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.036232985 +0000 UTC m=+16.997287227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : configmap "audit-0" not found Mar 18 13:07:21.036313 master-0 kubenswrapper[7599]: E0318 13:07:21.036257 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 13:07:21.036373 master-0 kubenswrapper[7599]: E0318 13:07:21.036343 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:22.036329417 +0000 UTC m=+16.997383669 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : secret "serving-cert" not found Mar 18 13:07:21.039499 master-0 kubenswrapper[7599]: I0318 13:07:21.039477 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz" Mar 18 13:07:21.784323 master-0 kubenswrapper[7599]: I0318 13:07:21.784272 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerID="9fff8d077ff23531e072493f53ed61382efbb48a6316de383a5ae419bc8c0c9f" exitCode=0 Mar 18 13:07:21.784323 master-0 kubenswrapper[7599]: I0318 13:07:21.784320 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerDied","Data":"9fff8d077ff23531e072493f53ed61382efbb48a6316de383a5ae419bc8c0c9f"} Mar 18 13:07:21.784728 master-0 kubenswrapper[7599]: I0318 13:07:21.784697 7599 scope.go:117] "RemoveContainer" containerID="9fff8d077ff23531e072493f53ed61382efbb48a6316de383a5ae419bc8c0c9f" Mar 18 13:07:22.048936 master-0 kubenswrapper[7599]: I0318 13:07:22.048813 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz" Mar 18 13:07:22.049444 master-0 kubenswrapper[7599]: E0318 13:07:22.048962 7599 secret.go:189] Couldn't get secret 
openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 13:07:22.049444 master-0 kubenswrapper[7599]: I0318 13:07:22.049034 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz" Mar 18 13:07:22.049444 master-0 kubenswrapper[7599]: E0318 13:07:22.049098 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 13:07:22.049444 master-0 kubenswrapper[7599]: E0318 13:07:22.049107 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:24.049081541 +0000 UTC m=+19.010135783 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : secret "serving-cert" not found Mar 18 13:07:22.049444 master-0 kubenswrapper[7599]: E0318 13:07:22.049135 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:24.049124482 +0000 UTC m=+19.010178734 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : configmap "audit-0" not found Mar 18 13:07:22.251526 master-0 kubenswrapper[7599]: I0318 13:07:22.250761 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:22.251526 master-0 kubenswrapper[7599]: I0318 13:07:22.250796 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:22.251526 master-0 kubenswrapper[7599]: I0318 13:07:22.251110 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:22.255479 master-0 kubenswrapper[7599]: I0318 13:07:22.255180 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:22.255606 master-0 
kubenswrapper[7599]: I0318 13:07:22.255555 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:22.257806 master-0 kubenswrapper[7599]: I0318 13:07:22.257767 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:22.352218 master-0 kubenswrapper[7599]: I0318 13:07:22.352164 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:22.352322 master-0 kubenswrapper[7599]: I0318 13:07:22.352238 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:22.352322 master-0 kubenswrapper[7599]: I0318 13:07:22.352280 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:07:22.352322 master-0 kubenswrapper[7599]: I0318 13:07:22.352303 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:07:22.352456 master-0 kubenswrapper[7599]: I0318 13:07:22.352361 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:07:22.352456 master-0 kubenswrapper[7599]: I0318 13:07:22.352398 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:22.352456 master-0 kubenswrapper[7599]: I0318 13:07:22.352452 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 
13:07:22.352569 master-0 kubenswrapper[7599]: I0318 13:07:22.352480 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:07:22.352569 master-0 kubenswrapper[7599]: I0318 13:07:22.352504 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:07:22.352628 master-0 kubenswrapper[7599]: I0318 13:07:22.352589 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:22.353230 master-0 kubenswrapper[7599]: E0318 13:07:22.353187 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 13:07:22.353308 master-0 kubenswrapper[7599]: E0318 13:07:22.353295 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert podName:d2455453-5943-49ef-bfea-cba077197da0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.353259504 +0000 UTC m=+33.314313746 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert") pod "catalog-operator-68f85b4d6c-t84s9" (UID: "d2455453-5943-49ef-bfea-cba077197da0") : secret "catalog-operator-serving-cert" not found Mar 18 13:07:22.353714 master-0 kubenswrapper[7599]: E0318 13:07:22.353701 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 13:07:22.353757 master-0 kubenswrapper[7599]: E0318 13:07:22.353728 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert podName:822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.353720875 +0000 UTC m=+33.314775117 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert") pod "olm-operator-5c9796789-z8jkt" (UID: "822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9") : secret "olm-operator-serving-cert" not found Mar 18 13:07:22.353811 master-0 kubenswrapper[7599]: E0318 13:07:22.353791 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 13:07:22.353843 master-0 kubenswrapper[7599]: E0318 13:07:22.353831 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs podName:bf1cc230-0a79-4a1d-b500-a65d02e50973 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.353807347 +0000 UTC m=+33.314861589 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs") pod "network-metrics-daemon-kbfbq" (UID: "bf1cc230-0a79-4a1d-b500-a65d02e50973") : secret "metrics-daemon-secret" not found Mar 18 13:07:22.353879 master-0 kubenswrapper[7599]: E0318 13:07:22.353870 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.353890 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics podName:fe643e40-d06d-4e69-9be3-0065c2a78567 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.353884229 +0000 UTC m=+33.314938461 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-99pzm" (UID: "fe643e40-d06d-4e69-9be3-0065c2a78567") : secret "marketplace-operator-metrics" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.353944 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.353965 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls podName:8c0e5eca-819b-40f3-bf77-0cd90a4f6e94 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.353958741 +0000 UTC m=+33.315012983 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-n8hgl" (UID: "8c0e5eca-819b-40f3-bf77-0cd90a4f6e94") : secret "cluster-monitoring-operator-tls" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354022 7599 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354040 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls podName:5a4202c2-c330-4a5d-87e7-0a63d069113f nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.354034833 +0000 UTC m=+33.315089075 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls") pod "machine-config-operator-84d549f6d5-dlr6p" (UID: "5a4202c2-c330-4a5d-87e7-0a63d069113f") : secret "mco-proxy-tls" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354091 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354109 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.354104155 +0000 UTC m=+33.315158397 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : secret "multus-admission-controller-secret" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: I0318 13:07:22.354125 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354319 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 13:07:22.354736 master-0 kubenswrapper[7599]: E0318 13:07:22.354390 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert podName:2ea9eb53-0385-4a1a-a64f-696f8520cf49 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:38.354367851 +0000 UTC m=+33.315422153 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-p7vvx" (UID: "2ea9eb53-0385-4a1a-a64f-696f8520cf49") : secret "package-server-manager-serving-cert" not found Mar 18 13:07:22.357988 master-0 kubenswrapper[7599]: I0318 13:07:22.357548 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:22.358065 master-0 kubenswrapper[7599]: I0318 13:07:22.357988 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:22.362360 master-0 kubenswrapper[7599]: I0318 13:07:22.362326 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"cluster-version-operator-56d8475767-f2hjp\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:22.487511 master-0 kubenswrapper[7599]: I0318 13:07:22.487465 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:07:22.488432 master-0 kubenswrapper[7599]: I0318 13:07:22.488363 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:07:22.493625 master-0 kubenswrapper[7599]: I0318 13:07:22.493590 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:07:22.493741 master-0 kubenswrapper[7599]: I0318 13:07:22.493638 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:07:22.493965 master-0 kubenswrapper[7599]: I0318 13:07:22.493895 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" Mar 18 13:07:22.545758 master-0 kubenswrapper[7599]: W0318 13:07:22.545662 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b06a568_4dad_44b4_8312_aa52911dbfb0.slice/crio-8eed59e505def0bdb2b5fa411b6441ef6eb68d8285c1a135dd0ca4c116a1e491 WatchSource:0}: Error finding container 8eed59e505def0bdb2b5fa411b6441ef6eb68d8285c1a135dd0ca4c116a1e491: Status 404 returned error can't find the container with id 8eed59e505def0bdb2b5fa411b6441ef6eb68d8285c1a135dd0ca4c116a1e491 Mar 18 13:07:22.762805 master-0 kubenswrapper[7599]: I0318 13:07:22.762374 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-5ftdj"] Mar 18 13:07:22.763516 master-0 kubenswrapper[7599]: I0318 13:07:22.763480 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:07:22.797267 master-0 kubenswrapper[7599]: I0318 13:07:22.793457 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" event={"ID":"2b06a568-4dad-44b4-8312-aa52911dbfb0","Type":"ContainerStarted","Data":"8eed59e505def0bdb2b5fa411b6441ef6eb68d8285c1a135dd0ca4c116a1e491"} Mar 18 13:07:22.797267 master-0 kubenswrapper[7599]: I0318 13:07:22.796282 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e"} Mar 18 13:07:22.797267 master-0 kubenswrapper[7599]: I0318 13:07:22.796596 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:07:22.798022 master-0 kubenswrapper[7599]: I0318 13:07:22.797439 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"] Mar 18 13:07:22.803103 master-0 kubenswrapper[7599]: I0318 13:07:22.802777 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" event={"ID":"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb","Type":"ContainerStarted","Data":"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69"} Mar 18 13:07:22.803611 master-0 kubenswrapper[7599]: I0318 13:07:22.803591 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" Mar 18 13:07:22.812194 master-0 kubenswrapper[7599]: I0318 13:07:22.811443 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerStarted","Data":"8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95"}
Mar 18 13:07:22.819195 master-0 kubenswrapper[7599]: I0318 13:07:22.817704 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65c549646c-d9988"
Mar 18 13:07:22.827239 master-0 kubenswrapper[7599]: I0318 13:07:22.827179 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"]
Mar 18 13:07:22.856300 master-0 kubenswrapper[7599]: I0318 13:07:22.855851 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"]
Mar 18 13:07:22.868463 master-0 kubenswrapper[7599]: I0318 13:07:22.867133 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" podStartSLOduration=6.16206926 podStartE2EDuration="9.867104197s" podCreationTimestamp="2026-03-18 13:07:13 +0000 UTC" firstStartedPulling="2026-03-18 13:07:18.54916215 +0000 UTC m=+13.510216392" lastFinishedPulling="2026-03-18 13:07:22.254197087 +0000 UTC m=+17.215251329" observedRunningTime="2026-03-18 13:07:22.86081805 +0000 UTC m=+17.821872322" watchObservedRunningTime="2026-03-18 13:07:22.867104197 +0000 UTC m=+17.828158449"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.872907 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.872945 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.872970 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873010 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873056 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873089 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873109 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkdr\" (UniqueName: \"kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873129 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873212 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873245 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873274 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873293 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873324 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.884291 master-0 kubenswrapper[7599]: I0318 13:07:22.873402 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.891016 master-0 kubenswrapper[7599]: I0318 13:07:22.890972 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"]
Mar 18 13:07:22.901708 master-0 kubenswrapper[7599]: W0318 13:07:22.901656 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9d09a56_ed4c_40b7_8be1_f3934c07296e.slice/crio-15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29 WatchSource:0}: Error finding container 15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29: Status 404 returned error can't find the container with id 15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.974842 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.974923 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.974957 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkdr\" (UniqueName: \"kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.974986 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975047 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975070 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975099 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975119 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975187 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975236 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975304 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975329 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975354 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975403 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975562 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975619 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975656 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975733 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975737 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975814 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.975922 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.976160 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.976178 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.976198 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.977083 master-0 kubenswrapper[7599]: I0318 13:07:22.976251 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.980983 master-0 kubenswrapper[7599]: I0318 13:07:22.980565 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.980983 master-0 kubenswrapper[7599]: I0318 13:07:22.980621 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:22.993795 master-0 kubenswrapper[7599]: I0318 13:07:22.993751 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkdr\" (UniqueName: \"kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:23.088648 master-0 kubenswrapper[7599]: I0318 13:07:23.088587 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:07:23.686527 master-0 kubenswrapper[7599]: I0318 13:07:23.685215 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"
Mar 18 13:07:23.686527 master-0 kubenswrapper[7599]: E0318 13:07:23.685380 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 13:07:23.686527 master-0 kubenswrapper[7599]: E0318 13:07:23.686159 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert podName:f53a0371-0ce3-4a3f-b3e1-0043a0f3e806 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:31.68613575 +0000 UTC m=+26.647189992 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert") pod "route-controller-manager-c8db4484-th8hr" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806") : secret "serving-cert" not found
Mar 18 13:07:23.816931 master-0 kubenswrapper[7599]: I0318 13:07:23.816884 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" event={"ID":"290d1f84-5c5c-4bff-b045-e6020793cded","Type":"ContainerStarted","Data":"cb74a42e367af8586d98d799b6ded81e9d93e7b3d806a9a925a94b3e763a3830"}
Mar 18 13:07:23.817998 master-0 kubenswrapper[7599]: I0318 13:07:23.817974 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29"}
Mar 18 13:07:23.820802 master-0 kubenswrapper[7599]: I0318 13:07:23.820524 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"e48d984bde067fff459bf66d3627856479bf9e2fe952a4228b45cfe581507bda"}
Mar 18 13:07:23.822178 master-0 kubenswrapper[7599]: I0318 13:07:23.822131 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"5e20d46e2ff68c35ec5f71de1a7613daa62264adc487ab5ef65e9454569fe466"}
Mar 18 13:07:23.824092 master-0 kubenswrapper[7599]: I0318 13:07:23.824022 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" event={"ID":"bf4c5410-fb44-45e8-ab66-24806e6349b8","Type":"ContainerStarted","Data":"ace486964b3baf64b75fe000f29856b42211876fd8f8e8061e47b74f3fd46fe3"}
Mar 18 13:07:23.824092 master-0 kubenswrapper[7599]: I0318 13:07:23.824060 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" event={"ID":"bf4c5410-fb44-45e8-ab66-24806e6349b8","Type":"ContainerStarted","Data":"7781d91a527c8a68391c4776d4e2edee3564ed2594b252a676d696ac4e021083"}
Mar 18 13:07:24.090968 master-0 kubenswrapper[7599]: I0318 13:07:24.090822 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:24.091773 master-0 kubenswrapper[7599]: I0318 13:07:24.090993 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") pod \"apiserver-85f4f795dd-9l6gz\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") " pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:24.091773 master-0 kubenswrapper[7599]: E0318 13:07:24.091120 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 13:07:24.091773 master-0 kubenswrapper[7599]: E0318 13:07:24.091186 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:28.091152833 +0000 UTC m=+23.052207075 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : configmap "audit-0" not found
Mar 18 13:07:24.091773 master-0 kubenswrapper[7599]: E0318 13:07:24.091372 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 13:07:24.091773 master-0 kubenswrapper[7599]: E0318 13:07:24.091469 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert podName:277dcc33-1743-4926-8624-e5a3e850bb51 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:28.09144899 +0000 UTC m=+23.052503232 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert") pod "apiserver-85f4f795dd-9l6gz" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51") : secret "serving-cert" not found
Mar 18 13:07:24.223574 master-0 kubenswrapper[7599]: I0318 13:07:24.223482 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" podStartSLOduration=2.22346415 podStartE2EDuration="2.22346415s" podCreationTimestamp="2026-03-18 13:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:23.84297752 +0000 UTC m=+18.804031782" watchObservedRunningTime="2026-03-18 13:07:24.22346415 +0000 UTC m=+19.184518392"
Mar 18 13:07:24.224608 master-0 kubenswrapper[7599]: I0318 13:07:24.224586 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-85f4f795dd-9l6gz"]
Mar 18 13:07:24.225170 master-0 kubenswrapper[7599]: E0318 13:07:24.225143 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz" podUID="277dcc33-1743-4926-8624-e5a3e850bb51"
Mar 18 13:07:24.830777 master-0 kubenswrapper[7599]: I0318 13:07:24.830027 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:24.841615 master-0 kubenswrapper[7599]: I0318 13:07:24.841579 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:24.901638 master-0 kubenswrapper[7599]: I0318 13:07:24.901589 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.901638 master-0 kubenswrapper[7599]: I0318 13:07:24.901657 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902089 master-0 kubenswrapper[7599]: I0318 13:07:24.901715 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902089 master-0 kubenswrapper[7599]: I0318 13:07:24.901748 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bb5g\" (UniqueName: \"kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902089 master-0 kubenswrapper[7599]: I0318 13:07:24.901810 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902089 master-0 kubenswrapper[7599]: I0318 13:07:24.902061 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902203 master-0 kubenswrapper[7599]: I0318 13:07:24.902079 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902203 master-0 kubenswrapper[7599]: I0318 13:07:24.902117 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.902203 master-0 kubenswrapper[7599]: I0318 13:07:24.902140 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") pod \"277dcc33-1743-4926-8624-e5a3e850bb51\" (UID: \"277dcc33-1743-4926-8624-e5a3e850bb51\") "
Mar 18 13:07:24.903134 master-0 kubenswrapper[7599]: I0318 13:07:24.902530 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:24.903134 master-0 kubenswrapper[7599]: I0318 13:07:24.902934 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:24.903544 master-0 kubenswrapper[7599]: I0318 13:07:24.903242 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config" (OuterVolumeSpecName: "config") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:24.903544 master-0 kubenswrapper[7599]: I0318 13:07:24.903320 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:24.903751 master-0 kubenswrapper[7599]: I0318 13:07:24.903703 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:24.903961 master-0 kubenswrapper[7599]: I0318 13:07:24.903932 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:24.906892 master-0 kubenswrapper[7599]: I0318 13:07:24.906856 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:07:24.907456 master-0 kubenswrapper[7599]: I0318 13:07:24.907401 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g" (OuterVolumeSpecName: "kube-api-access-9bb5g") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "kube-api-access-9bb5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:07:24.908351 master-0 kubenswrapper[7599]: I0318 13:07:24.908267 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "277dcc33-1743-4926-8624-e5a3e850bb51" (UID: "277dcc33-1743-4926-8624-e5a3e850bb51"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:07:25.003274 master-0 kubenswrapper[7599]: I0318 13:07:25.003231 7599 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-encryption-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003274 master-0 kubenswrapper[7599]: I0318 13:07:25.003269 7599 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003274 master-0 kubenswrapper[7599]: I0318 13:07:25.003284 7599 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003274 master-0 kubenswrapper[7599]: I0318 13:07:25.003295 7599 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-client\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003584 master-0 kubenswrapper[7599]: I0318 13:07:25.003306 7599 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-image-import-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003584 master-0 kubenswrapper[7599]: I0318 13:07:25.003318 7599 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003584 master-0 kubenswrapper[7599]: I0318 13:07:25.003328 7599 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/277dcc33-1743-4926-8624-e5a3e850bb51-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003584 master-0 kubenswrapper[7599]: I0318 13:07:25.003339 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bb5g\" (UniqueName: \"kubernetes.io/projected/277dcc33-1743-4926-8624-e5a3e850bb51-kube-api-access-9bb5g\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.003584 master-0 kubenswrapper[7599]: I0318 13:07:25.003348 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:25.834467 master-0 kubenswrapper[7599]: I0318 13:07:25.834399 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85f4f795dd-9l6gz"
Mar 18 13:07:25.956509 master-0 kubenswrapper[7599]: I0318 13:07:25.953433 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-85b59d8688-wd26k"]
Mar 18 13:07:25.956509 master-0 kubenswrapper[7599]: I0318 13:07:25.954444 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:25.959903 master-0 kubenswrapper[7599]: I0318 13:07:25.959853 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 13:07:25.960054 master-0 kubenswrapper[7599]: I0318 13:07:25.960041 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 13:07:25.960247 master-0 kubenswrapper[7599]: I0318 13:07:25.960121 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 13:07:25.960247 master-0 kubenswrapper[7599]: I0318 13:07:25.960194 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 13:07:25.960247 master-0 kubenswrapper[7599]: I0318 13:07:25.960134 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 13:07:25.960376 master-0 kubenswrapper[7599]: I0318 13:07:25.960270 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 13:07:25.960689 master-0 kubenswrapper[7599]: I0318 13:07:25.960642 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 13:07:25.961325 master-0 kubenswrapper[7599]: I0318 13:07:25.961309 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 13:07:25.961479 master-0 kubenswrapper[7599]: I0318 13:07:25.961463 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 13:07:25.965429 master-0 kubenswrapper[7599]: I0318 13:07:25.965289 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-85f4f795dd-9l6gz"]
Mar 18 13:07:25.967239 master-0 kubenswrapper[7599]: I0318 13:07:25.967215 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-85f4f795dd-9l6gz"]
Mar 18 13:07:25.968473 master-0 kubenswrapper[7599]: I0318 13:07:25.968399 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-85b59d8688-wd26k"]
Mar 18 13:07:25.969757 master-0 kubenswrapper[7599]: I0318 13:07:25.969702 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 13:07:25.980454 master-0 kubenswrapper[7599]: I0318 13:07:25.980399 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm"
Mar 18 13:07:26.034087 master-0 kubenswrapper[7599]: I0318 13:07:26.034013 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:26.034087 master-0 kubenswrapper[7599]: I0318 13:07:26.034085 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034112 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:26.034283 master-0
kubenswrapper[7599]: I0318 13:07:26.034149 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034169 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034200 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034222 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034242 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfb5c\" (UniqueName: \"kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c\") pod \"apiserver-85b59d8688-wd26k\" 
(UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034283 master-0 kubenswrapper[7599]: I0318 13:07:26.034263 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034526 master-0 kubenswrapper[7599]: I0318 13:07:26.034306 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034526 master-0 kubenswrapper[7599]: I0318 13:07:26.034339 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.034526 master-0 kubenswrapper[7599]: I0318 13:07:26.034402 7599 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/277dcc33-1743-4926-8624-e5a3e850bb51-audit\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:26.034526 master-0 kubenswrapper[7599]: I0318 13:07:26.034481 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/277dcc33-1743-4926-8624-e5a3e850bb51-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: I0318 13:07:26.135293 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfb5c\" (UniqueName: \"kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: I0318 13:07:26.135329 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: I0318 13:07:26.135374 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: I0318 13:07:26.135398 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: I0318 13:07:26.135464 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.135607 master-0 kubenswrapper[7599]: 
I0318 13:07:26.135487 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136205 master-0 kubenswrapper[7599]: I0318 13:07:26.136131 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136255 master-0 kubenswrapper[7599]: I0318 13:07:26.136232 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136308 master-0 kubenswrapper[7599]: I0318 13:07:26.136261 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136368 master-0 kubenswrapper[7599]: I0318 13:07:26.136334 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136368 master-0 
kubenswrapper[7599]: I0318 13:07:26.136346 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136456 master-0 kubenswrapper[7599]: I0318 13:07:26.136370 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136649 master-0 kubenswrapper[7599]: I0318 13:07:26.136590 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136992 master-0 kubenswrapper[7599]: I0318 13:07:26.136777 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.136992 master-0 kubenswrapper[7599]: E0318 13:07:26.136872 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 13:07:26.137189 master-0 kubenswrapper[7599]: E0318 13:07:26.137166 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert podName:a2bdf5b0-8764-4b15-97c9-20af36634fd0 
nodeName:}" failed. No retries permitted until 2026-03-18 13:07:26.636928227 +0000 UTC m=+21.597982469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert") pod "apiserver-85b59d8688-wd26k" (UID: "a2bdf5b0-8764-4b15-97c9-20af36634fd0") : secret "serving-cert" not found Mar 18 13:07:26.137696 master-0 kubenswrapper[7599]: I0318 13:07:26.137667 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.155023 master-0 kubenswrapper[7599]: I0318 13:07:26.154963 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.155023 master-0 kubenswrapper[7599]: I0318 13:07:26.154975 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.155128 master-0 kubenswrapper[7599]: I0318 13:07:26.155030 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.155167 
master-0 kubenswrapper[7599]: I0318 13:07:26.155141 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.156980 master-0 kubenswrapper[7599]: I0318 13:07:26.156948 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.157238 master-0 kubenswrapper[7599]: I0318 13:07:26.157206 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfb5c\" (UniqueName: \"kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.324732 master-0 kubenswrapper[7599]: I0318 13:07:26.324691 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:07:26.325153 master-0 kubenswrapper[7599]: I0318 13:07:26.325135 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.329132 master-0 kubenswrapper[7599]: I0318 13:07:26.329011 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 13:07:26.329656 master-0 kubenswrapper[7599]: I0318 13:07:26.329613 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:07:26.339154 master-0 kubenswrapper[7599]: I0318 13:07:26.338768 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.339154 master-0 kubenswrapper[7599]: I0318 13:07:26.338877 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.339154 master-0 kubenswrapper[7599]: I0318 13:07:26.338923 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.440270 master-0 kubenswrapper[7599]: I0318 13:07:26.440199 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access\") pod \"installer-1-master-0\" (UID: 
\"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.440794 master-0 kubenswrapper[7599]: I0318 13:07:26.440285 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.440794 master-0 kubenswrapper[7599]: I0318 13:07:26.440328 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.440794 master-0 kubenswrapper[7599]: I0318 13:07:26.440444 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.440794 master-0 kubenswrapper[7599]: I0318 13:07:26.440490 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.456386 master-0 kubenswrapper[7599]: I0318 13:07:26.456350 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:26.643835 master-0 kubenswrapper[7599]: I0318 13:07:26.643774 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:26.644017 master-0 kubenswrapper[7599]: E0318 13:07:26.643984 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 13:07:26.644049 master-0 kubenswrapper[7599]: E0318 13:07:26.644033 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert podName:a2bdf5b0-8764-4b15-97c9-20af36634fd0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:27.644017302 +0000 UTC m=+22.605071544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert") pod "apiserver-85b59d8688-wd26k" (UID: "a2bdf5b0-8764-4b15-97c9-20af36634fd0") : secret "serving-cert" not found Mar 18 13:07:26.662395 master-0 kubenswrapper[7599]: I0318 13:07:26.662350 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 13:07:27.378101 master-0 kubenswrapper[7599]: I0318 13:07:27.377836 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277dcc33-1743-4926-8624-e5a3e850bb51" path="/var/lib/kubelet/pods/277dcc33-1743-4926-8624-e5a3e850bb51/volumes" Mar 18 13:07:27.653941 master-0 kubenswrapper[7599]: I0318 13:07:27.653793 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:27.654151 master-0 kubenswrapper[7599]: E0318 13:07:27.654021 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 13:07:27.654151 master-0 kubenswrapper[7599]: E0318 13:07:27.654069 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert podName:a2bdf5b0-8764-4b15-97c9-20af36634fd0 nodeName:}" failed. No retries permitted until 2026-03-18 13:07:29.654055798 +0000 UTC m=+24.615110040 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert") pod "apiserver-85b59d8688-wd26k" (UID: "a2bdf5b0-8764-4b15-97c9-20af36634fd0") : secret "serving-cert" not found Mar 18 13:07:29.678228 master-0 kubenswrapper[7599]: I0318 13:07:29.677970 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:29.681169 master-0 kubenswrapper[7599]: I0318 13:07:29.681130 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:29.873532 master-0 kubenswrapper[7599]: I0318 13:07:29.873437 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:31.716820 master-0 kubenswrapper[7599]: I0318 13:07:31.716719 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:31.725339 master-0 kubenswrapper[7599]: I0318 13:07:31.725262 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"route-controller-manager-c8db4484-th8hr\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:31.735722 master-0 kubenswrapper[7599]: I0318 13:07:31.735656 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:32.013867 master-0 kubenswrapper[7599]: I0318 13:07:32.013755 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:07:32.014364 master-0 kubenswrapper[7599]: I0318 13:07:32.014335 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.020913 master-0 kubenswrapper[7599]: I0318 13:07:32.020881 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 13:07:32.021458 master-0 kubenswrapper[7599]: I0318 13:07:32.021394 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.021532 master-0 kubenswrapper[7599]: I0318 13:07:32.021503 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.021587 master-0 kubenswrapper[7599]: I0318 13:07:32.021556 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.126400 master-0 kubenswrapper[7599]: I0318 13:07:32.126319 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.126400 master-0 kubenswrapper[7599]: I0318 13:07:32.126400 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.126766 master-0 kubenswrapper[7599]: I0318 13:07:32.126503 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.126766 master-0 kubenswrapper[7599]: I0318 13:07:32.126665 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.126766 master-0 kubenswrapper[7599]: I0318 13:07:32.126732 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.573911 master-0 kubenswrapper[7599]: I0318 13:07:32.573823 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:07:32.642159 master-0 kubenswrapper[7599]: I0318 13:07:32.642093 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") " pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:32.649669 master-0 kubenswrapper[7599]: I0318 13:07:32.649616 7599 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 13:07:35.522084 master-0 kubenswrapper[7599]: I0318 13:07:35.513824 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:07:35.609665 master-0 kubenswrapper[7599]: I0318 13:07:35.609499 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"] Mar 18 13:07:35.620229 master-0 kubenswrapper[7599]: I0318 13:07:35.610284 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"] Mar 18 13:07:35.620229 master-0 kubenswrapper[7599]: I0318 13:07:35.610913 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.620229 master-0 kubenswrapper[7599]: I0318 13:07:35.611309 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.625242 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.625490 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.625610 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.625860 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.626204 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.640162 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"] Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.640302 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.640423 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"] Mar 18 13:07:35.641445 master-0 kubenswrapper[7599]: I0318 13:07:35.640545 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:07:35.661442 master-0 kubenswrapper[7599]: I0318 13:07:35.658133 7599 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"] Mar 18 13:07:35.675629 master-0 kubenswrapper[7599]: I0318 13:07:35.674780 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.675916 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvsl\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.675942 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.675961 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.675985 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsq9\" (UniqueName: 
\"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.676007 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676029 master-0 kubenswrapper[7599]: I0318 13:07:35.676026 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.676558 master-0 kubenswrapper[7599]: I0318 13:07:35.676054 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676558 master-0 kubenswrapper[7599]: I0318 13:07:35.676082 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache\") pod 
\"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.676558 master-0 kubenswrapper[7599]: I0318 13:07:35.676100 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.676558 master-0 kubenswrapper[7599]: I0318 13:07:35.676142 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.676558 master-0 kubenswrapper[7599]: I0318 13:07:35.676349 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.685972 master-0 kubenswrapper[7599]: I0318 13:07:35.684991 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"] Mar 18 13:07:35.686336 master-0 kubenswrapper[7599]: I0318 13:07:35.686311 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 13:07:35.687552 master-0 kubenswrapper[7599]: I0318 13:07:35.687529 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.688137 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.690984 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.691090 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.691148 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.691214 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:07:35.693716 master-0 kubenswrapper[7599]: I0318 13:07:35.691358 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780768 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsq9\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780809 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780826 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780849 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780869 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780900 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: 
\"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780929 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780945 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.780976 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781000 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781014 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781031 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdkqm\" (UniqueName: \"kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781055 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781071 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781090 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert\") pod 
\"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781121 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hvsl\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781139 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781155 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.781370 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.782723 master-0 kubenswrapper[7599]: I0318 13:07:35.782300 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.792662 master-0 kubenswrapper[7599]: I0318 13:07:35.788723 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.792662 master-0 kubenswrapper[7599]: I0318 13:07:35.788815 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.792662 master-0 kubenswrapper[7599]: I0318 13:07:35.788857 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.792662 master-0 kubenswrapper[7599]: I0318 13:07:35.788873 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache\") pod 
\"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.792662 master-0 kubenswrapper[7599]: I0318 13:07:35.789248 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.804065 master-0 kubenswrapper[7599]: I0318 13:07:35.803235 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"] Mar 18 13:07:35.804065 master-0 kubenswrapper[7599]: I0318 13:07:35.803489 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" podUID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" containerName="controller-manager" containerID="cri-o://b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69" gracePeriod=30 Mar 18 13:07:35.817247 master-0 kubenswrapper[7599]: I0318 13:07:35.805020 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.819728 master-0 kubenswrapper[7599]: I0318 13:07:35.819678 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:35.822566 master-0 kubenswrapper[7599]: I0318 13:07:35.822468 7599 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.834234 master-0 kubenswrapper[7599]: I0318 13:07:35.834197 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hvsl\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.834813 master-0 kubenswrapper[7599]: I0318 13:07:35.834784 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsq9\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:35.843529 master-0 kubenswrapper[7599]: I0318 13:07:35.838148 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890823 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config\") pod 
\"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890868 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890940 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890964 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890979 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.890993 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkqm\" (UniqueName: 
\"kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.891029 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.891089 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.891707 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.895096 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.895097 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.897532 master-0 kubenswrapper[7599]: I0318 13:07:35.895674 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.898785 master-0 kubenswrapper[7599]: I0318 13:07:35.898750 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.899163 master-0 kubenswrapper[7599]: I0318 13:07:35.899121 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.917181 master-0 kubenswrapper[7599]: I0318 13:07:35.917028 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.953871 master-0 kubenswrapper[7599]: I0318 13:07:35.953837 7599 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sdkqm\" (UniqueName: \"kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:35.967649 master-0 kubenswrapper[7599]: I0318 13:07:35.967572 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"a09e30a0e0a70728f4eacd16714f41244f1eaa2c744901296ee7506c0e6ed81f"} Mar 18 13:07:36.020889 master-0 kubenswrapper[7599]: I0318 13:07:36.020496 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:36.048844 master-0 kubenswrapper[7599]: I0318 13:07:36.048383 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:36.147611 master-0 kubenswrapper[7599]: I0318 13:07:36.147351 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:07:36.472258 master-0 kubenswrapper[7599]: I0318 13:07:36.472202 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 13:07:36.477929 master-0 kubenswrapper[7599]: I0318 13:07:36.477898 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:36.660601 master-0 kubenswrapper[7599]: I0318 13:07:36.648976 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" Mar 18 13:07:36.713812 master-0 kubenswrapper[7599]: I0318 13:07:36.713199 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert\") pod \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " Mar 18 13:07:36.713812 master-0 kubenswrapper[7599]: I0318 13:07:36.713299 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45rtd\" (UniqueName: \"kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd\") pod \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " Mar 18 13:07:36.713812 master-0 kubenswrapper[7599]: I0318 13:07:36.713357 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca\") pod \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " Mar 18 13:07:36.713812 master-0 kubenswrapper[7599]: I0318 13:07:36.713455 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config\") pod \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " Mar 18 13:07:36.713812 master-0 kubenswrapper[7599]: I0318 13:07:36.713566 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles\") pod \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\" (UID: \"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb\") " Mar 18 13:07:36.714457 master-0 kubenswrapper[7599]: I0318 13:07:36.714424 7599 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" (UID: "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:36.714746 master-0 kubenswrapper[7599]: I0318 13:07:36.714713 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" (UID: "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:36.715014 master-0 kubenswrapper[7599]: I0318 13:07:36.714986 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config" (OuterVolumeSpecName: "config") pod "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" (UID: "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:36.720534 master-0 kubenswrapper[7599]: I0318 13:07:36.719595 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd" (OuterVolumeSpecName: "kube-api-access-45rtd") pod "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" (UID: "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb"). InnerVolumeSpecName "kube-api-access-45rtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:36.721880 master-0 kubenswrapper[7599]: I0318 13:07:36.721841 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 13:07:36.723730 master-0 kubenswrapper[7599]: I0318 13:07:36.723200 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"] Mar 18 13:07:36.728801 master-0 kubenswrapper[7599]: I0318 13:07:36.727561 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:07:36.728801 master-0 kubenswrapper[7599]: E0318 13:07:36.727736 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" containerName="controller-manager" Mar 18 13:07:36.728801 master-0 kubenswrapper[7599]: I0318 13:07:36.727755 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" containerName="controller-manager" Mar 18 13:07:36.728801 master-0 kubenswrapper[7599]: I0318 13:07:36.727832 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" containerName="controller-manager" Mar 18 13:07:36.728801 master-0 kubenswrapper[7599]: I0318 13:07:36.728131 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.729869 master-0 kubenswrapper[7599]: I0318 13:07:36.729805 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" (UID: "5b2a641f-3fc0-4efe-b72e-429bfdedd2cb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:07:36.735548 master-0 kubenswrapper[7599]: W0318 13:07:36.735509 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod814ffa63_b08e_4de8_b912_8d7f0638230b.slice/crio-399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a WatchSource:0}: Error finding container 399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a: Status 404 returned error can't find the container with id 399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a Mar 18 13:07:36.746624 master-0 kubenswrapper[7599]: I0318 13:07:36.746575 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 13:07:36.754466 master-0 kubenswrapper[7599]: I0318 13:07:36.754047 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-85b59d8688-wd26k"] Mar 18 13:07:36.770466 master-0 kubenswrapper[7599]: W0318 13:07:36.761750 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16f8e725_f18a_478e_88c5_87d54aeb4857.slice/crio-66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2 WatchSource:0}: Error finding container 66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2: Status 404 returned error can't find the container with id 66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2 Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.816336 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.816506 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.817281 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.818120 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.818138 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.818175 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.818185 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:36.818371 master-0 kubenswrapper[7599]: I0318 13:07:36.818194 7599 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-45rtd\" (UniqueName: \"kubernetes.io/projected/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb-kube-api-access-45rtd\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:36.831175 master-0 kubenswrapper[7599]: I0318 13:07:36.831119 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"] Mar 18 13:07:36.875046 master-0 kubenswrapper[7599]: I0318 13:07:36.874976 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"] Mar 18 13:07:36.900312 master-0 kubenswrapper[7599]: W0318 13:07:36.900268 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98e9b9f2_dd2b_4bb0_b2a8_5659a7f95617.slice/crio-1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4 WatchSource:0}: Error finding container 1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4: Status 404 returned error can't find the container with id 1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4 Mar 18 13:07:36.923474 master-0 kubenswrapper[7599]: I0318 13:07:36.920732 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.923474 master-0 kubenswrapper[7599]: I0318 13:07:36.920781 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.923474 master-0 kubenswrapper[7599]: I0318 13:07:36.920853 
7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.923474 master-0 kubenswrapper[7599]: I0318 13:07:36.920914 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.923474 master-0 kubenswrapper[7599]: I0318 13:07:36.920947 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.937964 master-0 kubenswrapper[7599]: I0318 13:07:36.937904 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access\") pod \"installer-2-master-0\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.981842 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-92s8c"] Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.982686 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.984379 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.984590 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.984723 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:07:36.990555 master-0 kubenswrapper[7599]: I0318 13:07:36.984900 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:07:37.014681 master-0 kubenswrapper[7599]: I0318 13:07:37.013757 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-92s8c"] Mar 18 13:07:37.015204 master-0 kubenswrapper[7599]: I0318 13:07:37.015176 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"] Mar 18 13:07:37.015682 master-0 kubenswrapper[7599]: I0318 13:07:37.015659 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.027530 master-0 kubenswrapper[7599]: I0318 13:07:37.026835 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.027530 master-0 kubenswrapper[7599]: I0318 13:07:37.027388 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.027530 master-0 kubenswrapper[7599]: I0318 13:07:37.027478 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt55\" (UniqueName: \"kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.041355 master-0 kubenswrapper[7599]: I0318 13:07:37.040829 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"] Mar 18 13:07:37.055377 master-0 kubenswrapper[7599]: I0318 13:07:37.055335 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:07:37.070490 master-0 kubenswrapper[7599]: I0318 13:07:37.069350 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"dd510699ed24732f88c9dd87f0f6af2740999700a9b342734a145bb0ab91ee55"} Mar 18 13:07:37.072394 master-0 kubenswrapper[7599]: I0318 13:07:37.072362 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"3cfbaa8df9a218f8dee119016e4288585bf40d98d2be646b4c356cf8d4a6af1b"} Mar 18 13:07:37.072394 master-0 kubenswrapper[7599]: I0318 13:07:37.072392 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"49667c3562724d21d11f45af9648468c2dd5436306c9e389954957510ee7b256"} Mar 18 13:07:37.074982 master-0 kubenswrapper[7599]: I0318 13:07:37.074752 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" event={"ID":"2b06a568-4dad-44b4-8312-aa52911dbfb0","Type":"ContainerStarted","Data":"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"} Mar 18 13:07:37.120827 master-0 kubenswrapper[7599]: I0318 13:07:37.120168 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" event={"ID":"290d1f84-5c5c-4bff-b045-e6020793cded","Type":"ContainerStarted","Data":"070b282208ec733465a61cb3d4378f64269708ba5a361a70c5483204a7f87847"} Mar 18 13:07:37.122299 master-0 kubenswrapper[7599]: I0318 13:07:37.122260 7599 generic.go:334] "Generic (PLEG): container finished" 
podID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" containerID="b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69" exitCode=0 Mar 18 13:07:37.122347 master-0 kubenswrapper[7599]: I0318 13:07:37.122311 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" event={"ID":"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb","Type":"ContainerDied","Data":"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69"} Mar 18 13:07:37.122347 master-0 kubenswrapper[7599]: I0318 13:07:37.122325 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" Mar 18 13:07:37.122347 master-0 kubenswrapper[7599]: I0318 13:07:37.122343 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65c549646c-d9988" event={"ID":"5b2a641f-3fc0-4efe-b72e-429bfdedd2cb","Type":"ContainerDied","Data":"a5057585a8fcc6082a3c302654cc31b932a69b01c67b4b2e81c1135f36d47cf4"} Mar 18 13:07:37.122451 master-0 kubenswrapper[7599]: I0318 13:07:37.122381 7599 scope.go:117] "RemoveContainer" containerID="b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69" Mar 18 13:07:37.128287 master-0 kubenswrapper[7599]: I0318 13:07:37.128255 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.128361 master-0 kubenswrapper[7599]: I0318 13:07:37.128303 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config\") pod 
\"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.128361 master-0 kubenswrapper[7599]: I0318 13:07:37.128322 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hz46\" (UniqueName: \"kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.128459 master-0 kubenswrapper[7599]: I0318 13:07:37.128420 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.128515 master-0 kubenswrapper[7599]: I0318 13:07:37.128482 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.128570 master-0 kubenswrapper[7599]: I0318 13:07:37.128519 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.128570 master-0 kubenswrapper[7599]: I0318 13:07:37.128547 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.128648 master-0 kubenswrapper[7599]: I0318 13:07:37.128590 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt55\" (UniqueName: \"kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.134635 master-0 kubenswrapper[7599]: I0318 13:07:37.134489 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.140823 master-0 kubenswrapper[7599]: I0318 13:07:37.140759 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.155092 master-0 kubenswrapper[7599]: I0318 13:07:37.155065 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"2c1eaab5376b76077cdd4ce6b7a0fb23bc1c0baefb99ecfa31f11681b75f8136"} Mar 18 13:07:37.155181 master-0 kubenswrapper[7599]: I0318 13:07:37.155164 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" 
event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"6edbcdc30c81dd06208679a3331d6c44ead81bfa5ca710d7268a4a8e1bd10597"} Mar 18 13:07:37.155654 master-0 kubenswrapper[7599]: I0318 13:07:37.155639 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt55\" (UniqueName: \"kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:37.186578 master-0 kubenswrapper[7599]: I0318 13:07:37.180835 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4"} Mar 18 13:07:37.186578 master-0 kubenswrapper[7599]: I0318 13:07:37.182228 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" event={"ID":"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806","Type":"ContainerStarted","Data":"d0124db992c8b4a40b2921c9c854708919e8cb86949f743f1b0be67934dcf587"} Mar 18 13:07:37.186578 master-0 kubenswrapper[7599]: I0318 13:07:37.186351 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2"} Mar 18 13:07:37.189567 master-0 kubenswrapper[7599]: I0318 13:07:37.188661 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="5ea7380f-2659-4054-ac83-2e4e698f382d" containerName="installer" containerID="cri-o://c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79" 
gracePeriod=30 Mar 18 13:07:37.189567 master-0 kubenswrapper[7599]: I0318 13:07:37.188733 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"5ea7380f-2659-4054-ac83-2e4e698f382d","Type":"ContainerStarted","Data":"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"} Mar 18 13:07:37.189567 master-0 kubenswrapper[7599]: I0318 13:07:37.188774 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"5ea7380f-2659-4054-ac83-2e4e698f382d","Type":"ContainerStarted","Data":"36a47f299381d84049f4ba7292ec286424a9c867c28704d2d552780a1b889b9f"} Mar 18 13:07:37.191225 master-0 kubenswrapper[7599]: I0318 13:07:37.191186 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"7c0a9d3ecc02d97801da90faa78ea9a04fc4381142a502c2ebc0a26f2eb9f11b"} Mar 18 13:07:37.197516 master-0 kubenswrapper[7599]: I0318 13:07:37.196597 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerStarted","Data":"70ca4cb931b7545d294f00c69b8bfe23595c69c1d94a66566a713806aa3eda58"} Mar 18 13:07:37.200571 master-0 kubenswrapper[7599]: I0318 13:07:37.200533 7599 scope.go:117] "RemoveContainer" containerID="b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69" Mar 18 13:07:37.207978 master-0 kubenswrapper[7599]: E0318 13:07:37.207669 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69\": container with ID starting with b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69 not found: ID does not exist" 
containerID="b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69" Mar 18 13:07:37.207978 master-0 kubenswrapper[7599]: I0318 13:07:37.207703 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69"} err="failed to get container status \"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69\": rpc error: code = NotFound desc = could not find container \"b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69\": container with ID starting with b7c5dbd3ce8dfa128c538acb90abe2510694b1e554313e9a208817b7e69b7d69 not found: ID does not exist" Mar 18 13:07:37.207978 master-0 kubenswrapper[7599]: I0318 13:07:37.207919 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerStarted","Data":"399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a"} Mar 18 13:07:37.232759 master-0 kubenswrapper[7599]: I0318 13:07:37.231459 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.232759 master-0 kubenswrapper[7599]: I0318 13:07:37.231553 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.232759 master-0 kubenswrapper[7599]: I0318 13:07:37.231593 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9hz46\" (UniqueName: \"kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.237792 master-0 kubenswrapper[7599]: I0318 13:07:37.234688 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.237792 master-0 kubenswrapper[7599]: I0318 13:07:37.234858 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.237792 master-0 kubenswrapper[7599]: I0318 13:07:37.237201 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:37.239203 master-0 kubenswrapper[7599]: I0318 13:07:37.239045 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=11.239031034 podStartE2EDuration="11.239031034s" podCreationTimestamp="2026-03-18 13:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:37.237690591 +0000 UTC m=+32.198744833" watchObservedRunningTime="2026-03-18 13:07:37.239031034 +0000 UTC m=+32.200085276"
Mar 18 13:07:37.239787 master-0 kubenswrapper[7599]: I0318 13:07:37.239320 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:37.248606 master-0 kubenswrapper[7599]: I0318 13:07:37.248001 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:37.257483 master-0 kubenswrapper[7599]: I0318 13:07:37.255687 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:37.259988 master-0 kubenswrapper[7599]: I0318 13:07:37.259916 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hz46\" (UniqueName: \"kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46\") pod \"controller-manager-774f95d4b8-2gwjr\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:37.260437 master-0 kubenswrapper[7599]: I0318 13:07:37.260383 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"]
Mar 18 13:07:37.272346 master-0 kubenswrapper[7599]: I0318 13:07:37.267950 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65c549646c-d9988"]
Mar 18 13:07:37.287331 master-0 kubenswrapper[7599]: I0318 13:07:37.285771 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=7.285753022 podStartE2EDuration="7.285753022s" podCreationTimestamp="2026-03-18 13:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:37.284538942 +0000 UTC m=+32.245593184" watchObservedRunningTime="2026-03-18 13:07:37.285753022 +0000 UTC m=+32.246807264"
Mar 18 13:07:37.300132 master-0 kubenswrapper[7599]: I0318 13:07:37.300067 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-92s8c"
Mar 18 13:07:37.381867 master-0 kubenswrapper[7599]: I0318 13:07:37.381829 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:37.405903 master-0 kubenswrapper[7599]: I0318 13:07:37.401313 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2a641f-3fc0-4efe-b72e-429bfdedd2cb" path="/var/lib/kubelet/pods/5b2a641f-3fc0-4efe-b72e-429bfdedd2cb/volumes"
Mar 18 13:07:37.504037 master-0 kubenswrapper[7599]: I0318 13:07:37.495547 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7vddk"]
Mar 18 13:07:37.507518 master-0 kubenswrapper[7599]: I0318 13:07:37.505835 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.552619 master-0 kubenswrapper[7599]: I0318 13:07:37.551331 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.552619 master-0 kubenswrapper[7599]: I0318 13:07:37.551376 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zplb4\" (UniqueName: \"kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.637167 master-0 kubenswrapper[7599]: I0318 13:07:37.632912 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 13:07:37.659899 master-0 kubenswrapper[7599]: I0318 13:07:37.659852 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.659899 master-0 kubenswrapper[7599]: I0318 13:07:37.659889 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zplb4\" (UniqueName: \"kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.662548 master-0 kubenswrapper[7599]: I0318 13:07:37.660222 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.680160 master-0 kubenswrapper[7599]: I0318 13:07:37.679327 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-92s8c"]
Mar 18 13:07:37.681209 master-0 kubenswrapper[7599]: I0318 13:07:37.681022 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zplb4\" (UniqueName: \"kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.704616 master-0 kubenswrapper[7599]: W0318 13:07:37.704564 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod029b127e_0faf_4957_b591_9c561b053cda.slice/crio-0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd WatchSource:0}: Error finding container 0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd: Status 404 returned error can't find the container with id 0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd
Mar 18 13:07:37.744778 master-0 kubenswrapper[7599]: I0318 13:07:37.744513 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_5ea7380f-2659-4054-ac83-2e4e698f382d/installer/0.log"
Mar 18 13:07:37.744778 master-0 kubenswrapper[7599]: I0318 13:07:37.744567 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:07:37.865304 master-0 kubenswrapper[7599]: I0318 13:07:37.865255 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock\") pod \"5ea7380f-2659-4054-ac83-2e4e698f382d\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") "
Mar 18 13:07:37.865304 master-0 kubenswrapper[7599]: I0318 13:07:37.865310 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access\") pod \"5ea7380f-2659-4054-ac83-2e4e698f382d\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") "
Mar 18 13:07:37.865531 master-0 kubenswrapper[7599]: I0318 13:07:37.865352 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir\") pod \"5ea7380f-2659-4054-ac83-2e4e698f382d\" (UID: \"5ea7380f-2659-4054-ac83-2e4e698f382d\") "
Mar 18 13:07:37.865531 master-0 kubenswrapper[7599]: I0318 13:07:37.865341 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock" (OuterVolumeSpecName: "var-lock") pod "5ea7380f-2659-4054-ac83-2e4e698f382d" (UID: "5ea7380f-2659-4054-ac83-2e4e698f382d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:37.865646 master-0 kubenswrapper[7599]: I0318 13:07:37.865604 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:37.865687 master-0 kubenswrapper[7599]: I0318 13:07:37.865643 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ea7380f-2659-4054-ac83-2e4e698f382d" (UID: "5ea7380f-2659-4054-ac83-2e4e698f382d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:37.886469 master-0 kubenswrapper[7599]: I0318 13:07:37.879917 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ea7380f-2659-4054-ac83-2e4e698f382d" (UID: "5ea7380f-2659-4054-ac83-2e4e698f382d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:07:37.916438 master-0 kubenswrapper[7599]: I0318 13:07:37.912549 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:07:37.949770 master-0 kubenswrapper[7599]: I0318 13:07:37.947324 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"]
Mar 18 13:07:37.967520 master-0 kubenswrapper[7599]: I0318 13:07:37.966903 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ea7380f-2659-4054-ac83-2e4e698f382d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:37.967520 master-0 kubenswrapper[7599]: I0318 13:07:37.967525 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ea7380f-2659-4054-ac83-2e4e698f382d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:38.216040 master-0 kubenswrapper[7599]: I0318 13:07:38.215855 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" event={"ID":"b03ef547-0c72-404f-8309-8078ccd57f15","Type":"ContainerStarted","Data":"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992"}
Mar 18 13:07:38.216040 master-0 kubenswrapper[7599]: I0318 13:07:38.215898 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" event={"ID":"b03ef547-0c72-404f-8309-8078ccd57f15","Type":"ContainerStarted","Data":"3093ac8ba00b798fb5f357f30926a61bbcc60409e576af7343bf0b0aec18058b"}
Mar 18 13:07:38.216262 master-0 kubenswrapper[7599]: I0318 13:07:38.216184 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:38.218777 master-0 kubenswrapper[7599]: I0318 13:07:38.218747 7599 patch_prober.go:28] interesting pod/controller-manager-774f95d4b8-2gwjr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 18 13:07:38.218899 master-0 kubenswrapper[7599]: I0318 13:07:38.218787 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 18 13:07:38.219279 master-0 kubenswrapper[7599]: I0318 13:07:38.219246 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"250b63d5-21ee-44d3-821e-f42a8112dc50","Type":"ContainerStarted","Data":"53bd0f911da22f6347919de47020dd5ee65cf68785aa75b9d25bd48d7e0221f2"}
Mar 18 13:07:38.219342 master-0 kubenswrapper[7599]: I0318 13:07:38.219285 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"250b63d5-21ee-44d3-821e-f42a8112dc50","Type":"ContainerStarted","Data":"1c1ec2ef0ddc216ba6a24212f029996acc4207f26f3f7674359334d3b8b83054"}
Mar 18 13:07:38.228289 master-0 kubenswrapper[7599]: I0318 13:07:38.227872 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7vddk" event={"ID":"13c71f7d-1485-4f86-beb2-ee16cf420350","Type":"ContainerStarted","Data":"9e3149a06c6f175072a4f298029a63d5886a08058f2cfbf229c65bf7015d1f34"}
Mar 18 13:07:38.231956 master-0 kubenswrapper[7599]: I0318 13:07:38.231834 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"aebb870640af737294de5fde7faf1b19862e6f81b4ae715f35fdf208373b75e7"}
Mar 18 13:07:38.231956 master-0 kubenswrapper[7599]: I0318 13:07:38.231880 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"eefde7657bd37afb4c9f371b147972b63886e6cf2d4cda43e0d1e78de918e266"}
Mar 18 13:07:38.232160 master-0 kubenswrapper[7599]: I0318 13:07:38.232118 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"
Mar 18 13:07:38.232201 master-0 kubenswrapper[7599]: I0318 13:07:38.232106 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" podStartSLOduration=3.232087266 podStartE2EDuration="3.232087266s" podCreationTimestamp="2026-03-18 13:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:38.231637145 +0000 UTC m=+33.192691377" watchObservedRunningTime="2026-03-18 13:07:38.232087266 +0000 UTC m=+33.193141508"
Mar 18 13:07:38.239868 master-0 kubenswrapper[7599]: I0318 13:07:38.239827 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"59c8efa660020136a15ef14448bc9cb0b22e7df7c1b1767ff473eca4a83bd7ff"}
Mar 18 13:07:38.240042 master-0 kubenswrapper[7599]: I0318 13:07:38.239877 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"e6d3b86684e16237f7515b45dbb7b40a94f5f8bddf2d34d18c36a6a4d6af41b4"}
Mar 18 13:07:38.240440 master-0 kubenswrapper[7599]: I0318 13:07:38.240394 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247197 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_5ea7380f-2659-4054-ac83-2e4e698f382d/installer/0.log"
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247247 7599 generic.go:334] "Generic (PLEG): container finished" podID="5ea7380f-2659-4054-ac83-2e4e698f382d" containerID="c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79" exitCode=2
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247306 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"5ea7380f-2659-4054-ac83-2e4e698f382d","Type":"ContainerDied","Data":"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"}
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247334 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"5ea7380f-2659-4054-ac83-2e4e698f382d","Type":"ContainerDied","Data":"36a47f299381d84049f4ba7292ec286424a9c867c28704d2d552780a1b889b9f"}
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247353 7599 scope.go:117] "RemoveContainer" containerID="c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"
Mar 18 13:07:38.247685 master-0 kubenswrapper[7599]: I0318 13:07:38.247479 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 13:07:38.248063 master-0 kubenswrapper[7599]: I0318 13:07:38.247991 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.247971873 podStartE2EDuration="2.247971873s" podCreationTimestamp="2026-03-18 13:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:38.24586946 +0000 UTC m=+33.206923732" watchObservedRunningTime="2026-03-18 13:07:38.247971873 +0000 UTC m=+33.209026115"
Mar 18 13:07:38.263503 master-0 kubenswrapper[7599]: I0318 13:07:38.261534 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerStarted","Data":"bd16bdf4e73c45c278128af3a659c5a213de4cb9ef8b0c72e75eabe56dd40dbc"}
Mar 18 13:07:38.266773 master-0 kubenswrapper[7599]: I0318 13:07:38.266567 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd"}
Mar 18 13:07:38.266934 master-0 kubenswrapper[7599]: I0318 13:07:38.266853 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podStartSLOduration=3.266840064 podStartE2EDuration="3.266840064s" podCreationTimestamp="2026-03-18 13:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:38.265356077 +0000 UTC m=+33.226410319" watchObservedRunningTime="2026-03-18 13:07:38.266840064 +0000 UTC m=+33.227894306"
Mar 18 13:07:38.290191 master-0 kubenswrapper[7599]: I0318 13:07:38.290149 7599 scope.go:117] "RemoveContainer" containerID="c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"
Mar 18 13:07:38.290810 master-0 kubenswrapper[7599]: E0318 13:07:38.290613 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79\": container with ID starting with c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79 not found: ID does not exist" containerID="c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"
Mar 18 13:07:38.290810 master-0 kubenswrapper[7599]: I0318 13:07:38.290655 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79"} err="failed to get container status \"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79\": rpc error: code = NotFound desc = could not find container \"c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79\": container with ID starting with c28d73dd01e2a2c007b30a41a2b628f1f5b28df07cc0688388b2cc159cc11b79 not found: ID does not exist"
Mar 18 13:07:38.320128 master-0 kubenswrapper[7599]: I0318 13:07:38.320067 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podStartSLOduration=3.320052254 podStartE2EDuration="3.320052254s" podCreationTimestamp="2026-03-18 13:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:38.298276 +0000 UTC m=+33.259330242" watchObservedRunningTime="2026-03-18 13:07:38.320052254 +0000 UTC m=+33.281106496"
Mar 18 13:07:38.320286 master-0 kubenswrapper[7599]: I0318 13:07:38.320261 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 13:07:38.334999 master-0 kubenswrapper[7599]: I0318 13:07:38.334946 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 13:07:38.373680 master-0 kubenswrapper[7599]: I0318 13:07:38.373625 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:38.373933 master-0 kubenswrapper[7599]: I0318 13:07:38.373766 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:38.373933 master-0 kubenswrapper[7599]: I0318 13:07:38.373793 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:38.373933 master-0 kubenswrapper[7599]: I0318 13:07:38.373851 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:38.373933 master-0 kubenswrapper[7599]: I0318 13:07:38.373900 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:38.373933 master-0 kubenswrapper[7599]: I0318 13:07:38.373920 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:38.374129 master-0 kubenswrapper[7599]: I0318 13:07:38.373949 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:38.374129 master-0 kubenswrapper[7599]: I0318 13:07:38.373966 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:38.379098 master-0 kubenswrapper[7599]: I0318 13:07:38.378710 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:38.380870 master-0 kubenswrapper[7599]: I0318 13:07:38.380843 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:38.381804 master-0 kubenswrapper[7599]: I0318 13:07:38.381770 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:38.381858 master-0 kubenswrapper[7599]: I0318 13:07:38.381829 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:38.382370 master-0 kubenswrapper[7599]: I0318 13:07:38.382343 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:38.383063 master-0 kubenswrapper[7599]: I0318 13:07:38.383032 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:38.384982 master-0 kubenswrapper[7599]: I0318 13:07:38.384937 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:38.392477 master-0 kubenswrapper[7599]: I0318 13:07:38.391114 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:07:38.392477 master-0 kubenswrapper[7599]: I0318 13:07:38.391575 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:07:38.393870 master-0 kubenswrapper[7599]: I0318 13:07:38.393824 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:07:38.395887 master-0 kubenswrapper[7599]: I0318 13:07:38.395855 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:07:38.396185 master-0 kubenswrapper[7599]: I0318 13:07:38.396161 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:38.400133 master-0 kubenswrapper[7599]: I0318 13:07:38.398568 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:07:38.405853 master-0 kubenswrapper[7599]: I0318 13:07:38.405208 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:38.516843 master-0 kubenswrapper[7599]: I0318 13:07:38.514182 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:38.698285 master-0 kubenswrapper[7599]: I0318 13:07:38.698119 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:07:39.039547 master-0 kubenswrapper[7599]: I0318 13:07:39.036983 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"]
Mar 18 13:07:39.109370 master-0 kubenswrapper[7599]: I0318 13:07:39.109234 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"]
Mar 18 13:07:39.138044 master-0 kubenswrapper[7599]: I0318 13:07:39.136647 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"]
Mar 18 13:07:39.138044 master-0 kubenswrapper[7599]: I0318 13:07:39.136937 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kbfbq"]
Mar 18 13:07:39.138655 master-0 kubenswrapper[7599]: I0318 13:07:39.138068 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"]
Mar 18 13:07:39.144010 master-0 kubenswrapper[7599]: W0318 13:07:39.143971 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a4202c2_c330_4a5d_87e7_0a63d069113f.slice/crio-29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20 WatchSource:0}: Error finding container 29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20: Status 404 returned error can't find the container with id 29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20
Mar 18 13:07:39.145852 master-0 kubenswrapper[7599]: W0318 13:07:39.145792 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c0e5eca_819b_40f3_bf77_0cd90a4f6e94.slice/crio-d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248 WatchSource:0}: Error finding container d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248: Status 404 returned error can't find the container with id d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248
Mar 18 13:07:39.162638 master-0 kubenswrapper[7599]: W0318 13:07:39.162587 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1cc230_0a79_4a1d_b500_a65d02e50973.slice/crio-8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9 WatchSource:0}: Error finding container 8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9: Status 404 returned error can't find the container with id 8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9
Mar 18 13:07:39.272783 master-0 kubenswrapper[7599]: I0318 13:07:39.272743 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"bb9b31a21bb7804acfec780725627f507b26c743a8732b84d2fc722559953044"}
Mar 18 13:07:39.276078 master-0 kubenswrapper[7599]: I0318 13:07:39.272794 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20"}
Mar 18 13:07:39.280485 master-0 kubenswrapper[7599]: I0318 13:07:39.280443 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7vddk" event={"ID":"13c71f7d-1485-4f86-beb2-ee16cf420350","Type":"ContainerStarted","Data":"60319838cb4c436130cf522bd5ef49f412cc405649c46fe810d603e975d6844e"}
Mar 18 13:07:39.282110 master-0 kubenswrapper[7599]: I0318 13:07:39.282050 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9"}
Mar 18 13:07:39.283336 master-0 kubenswrapper[7599]: I0318 13:07:39.283252 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" event={"ID":"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94","Type":"ContainerStarted","Data":"d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248"}
Mar 18 13:07:39.284780 master-0 kubenswrapper[7599]: I0318 13:07:39.284729 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerStarted","Data":"54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d"}
Mar 18 13:07:39.286063 master-0 kubenswrapper[7599]: I0318 13:07:39.286014 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" event={"ID":"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9","Type":"ContainerStarted","Data":"1cceb4712c77ca2fdf0849f1bea9fd2ebeb3d8a95d1db4ec067d2a7d333a8d1f"}
Mar 18 13:07:39.291240 master-0 kubenswrapper[7599]: I0318 13:07:39.291027 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-99pzm"]
Mar 18 13:07:39.296182 master-0 kubenswrapper[7599]: I0318 13:07:39.296133 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7vddk" podStartSLOduration=2.29607045 podStartE2EDuration="2.29607045s" podCreationTimestamp="2026-03-18 13:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:39.295511526 +0000 UTC m=+34.256565768" watchObservedRunningTime="2026-03-18 13:07:39.29607045 +0000 UTC m=+34.257124692"
Mar 18 13:07:39.299701 master-0 kubenswrapper[7599]: I0318 13:07:39.299685 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"
Mar 18 13:07:39.354240 master-0 kubenswrapper[7599]: I0318 13:07:39.354215 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"]
Mar 18 13:07:39.359356 master-0 kubenswrapper[7599]: I0318 13:07:39.359225 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"]
Mar 18 13:07:39.363534 master-0 kubenswrapper[7599]: W0318 13:07:39.363474 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea9eb53_0385_4a1a_a64f_696f8520cf49.slice/crio-5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f WatchSource:0}: Error finding container 5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f: Status 404 returned error can't find the container with id 5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f
Mar 18 13:07:39.391021 master-0 kubenswrapper[7599]: I0318 13:07:39.390952 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea7380f-2659-4054-ac83-2e4e698f382d" path="/var/lib/kubelet/pods/5ea7380f-2659-4054-ac83-2e4e698f382d/volumes"
Mar 18 13:07:39.842445 master-0 kubenswrapper[7599]: I0318 13:07:39.842329 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-kcsgp"
Mar 18 13:07:40.301239 master-0 kubenswrapper[7599]: I0318 13:07:40.300961 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"5fb9005824f3eda87674b34f8ef509039990a1d8b887fbb8b0af782cf52d8bd8"}
Mar 18 13:07:40.302692 master-0 kubenswrapper[7599]: I0318 13:07:40.302652 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"637824f5bb31724423d6735813857b47b37d15ab88987d8a010fd58f58c5ab69"}
Mar 18 13:07:40.304271 master-0 kubenswrapper[7599]: I0318 13:07:40.304112 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"cb3a395c88586f9726036952a749f0819efe1ca07bfec591e8bf77ac60734a87"}
Mar 18 13:07:40.312447 master-0 kubenswrapper[7599]: I0318 13:07:40.312340 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"b4b45a7fb108962bc9dd2947cc8423b17d4611ef737a6e7507f8ef8f54c77640"}
Mar 18 13:07:40.312700 master-0 kubenswrapper[7599]: I0318 13:07:40.312659 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f"}
Mar 18 13:07:42.088327 master-0 kubenswrapper[7599]: I0318 13:07:42.088145 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5blrl"]
Mar 18 13:07:42.090086 master-0 kubenswrapper[7599]: E0318 13:07:42.089694 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea7380f-2659-4054-ac83-2e4e698f382d" containerName="installer" Mar 
18 13:07:42.090086 master-0 kubenswrapper[7599]: I0318 13:07:42.089717 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ea7380f-2659-4054-ac83-2e4e698f382d" containerName="installer" Mar 18 13:07:42.090086 master-0 kubenswrapper[7599]: I0318 13:07:42.089819 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea7380f-2659-4054-ac83-2e4e698f382d" containerName="installer" Mar 18 13:07:42.091586 master-0 kubenswrapper[7599]: I0318 13:07:42.091191 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.093556 master-0 kubenswrapper[7599]: I0318 13:07:42.093524 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:07:42.179022 master-0 kubenswrapper[7599]: I0318 13:07:42.178973 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.179253 master-0 kubenswrapper[7599]: I0318 13:07:42.179140 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.179253 master-0 kubenswrapper[7599]: I0318 13:07:42.179208 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jxdg\" (UniqueName: \"kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg\") 
pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.179253 master-0 kubenswrapper[7599]: I0318 13:07:42.179240 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.279895 master-0 kubenswrapper[7599]: I0318 13:07:42.279669 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jxdg\" (UniqueName: \"kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.279895 master-0 kubenswrapper[7599]: I0318 13:07:42.279741 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.279895 master-0 kubenswrapper[7599]: I0318 13:07:42.279792 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.279895 master-0 kubenswrapper[7599]: I0318 13:07:42.279827 7599 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.280242 master-0 kubenswrapper[7599]: I0318 13:07:42.279944 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.280911 master-0 kubenswrapper[7599]: I0318 13:07:42.280839 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.295626 master-0 kubenswrapper[7599]: I0318 13:07:42.284185 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:42.813511 master-0 kubenswrapper[7599]: I0318 13:07:42.807696 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jxdg\" (UniqueName: \"kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:43.016151 master-0 kubenswrapper[7599]: I0318 
13:07:43.015395 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:07:46.124013 master-0 kubenswrapper[7599]: I0318 13:07:46.122887 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:07:46.124013 master-0 kubenswrapper[7599]: I0318 13:07:46.122945 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:07:46.132155 master-0 kubenswrapper[7599]: I0318 13:07:46.132103 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:07:46.134929 master-0 kubenswrapper[7599]: I0318 13:07:46.134887 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.142197 master-0 kubenswrapper[7599]: I0318 13:07:46.139923 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:07:46.145551 master-0 kubenswrapper[7599]: I0318 13:07:46.145515 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:07:46.252271 master-0 kubenswrapper[7599]: I0318 13:07:46.252212 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 13:07:46.253181 master-0 kubenswrapper[7599]: I0318 13:07:46.253039 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.255478 master-0 kubenswrapper[7599]: I0318 13:07:46.255446 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 13:07:46.256272 master-0 kubenswrapper[7599]: I0318 13:07:46.256228 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:07:46.294639 master-0 kubenswrapper[7599]: I0318 13:07:46.294596 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.294822 master-0 kubenswrapper[7599]: I0318 13:07:46.294672 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.294822 master-0 kubenswrapper[7599]: I0318 13:07:46.294699 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.395920 master-0 kubenswrapper[7599]: I0318 13:07:46.395818 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock\") pod 
\"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.396087 master-0 kubenswrapper[7599]: I0318 13:07:46.395919 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.396087 master-0 kubenswrapper[7599]: I0318 13:07:46.395954 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.396087 master-0 kubenswrapper[7599]: I0318 13:07:46.395992 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.396087 master-0 kubenswrapper[7599]: I0318 13:07:46.396016 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.396087 master-0 kubenswrapper[7599]: I0318 13:07:46.396044 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.396250 master-0 kubenswrapper[7599]: I0318 13:07:46.396125 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.396250 master-0 kubenswrapper[7599]: I0318 13:07:46.396173 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.428031 master-0 kubenswrapper[7599]: I0318 13:07:46.427969 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.492437 master-0 kubenswrapper[7599]: I0318 13:07:46.485140 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:07:46.510438 master-0 kubenswrapper[7599]: I0318 13:07:46.497363 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.510438 master-0 kubenswrapper[7599]: I0318 13:07:46.497441 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.510438 master-0 kubenswrapper[7599]: I0318 13:07:46.497478 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.510438 master-0 kubenswrapper[7599]: I0318 13:07:46.497577 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.510438 master-0 kubenswrapper[7599]: I0318 13:07:46.497625 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.528442 master-0 kubenswrapper[7599]: I0318 13:07:46.525354 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:46.598443 master-0 kubenswrapper[7599]: I0318 13:07:46.598032 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:07:48.345471 master-0 kubenswrapper[7599]: I0318 13:07:48.342271 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:07:48.448338 master-0 kubenswrapper[7599]: I0318 13:07:48.448289 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 13:07:48.587346 master-0 kubenswrapper[7599]: I0318 13:07:48.587298 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"556adc9fcaa1c4f729bd2c62ba03266f487249b8813b55699d8f5f124825641f"} Mar 18 13:07:48.604834 master-0 kubenswrapper[7599]: I0318 13:07:48.604452 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerStarted","Data":"b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8"} Mar 18 13:07:48.610249 master-0 kubenswrapper[7599]: I0318 13:07:48.610206 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" 
event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"ae6b8122ce3ad297d1b8d967c790c62c2b0fe5b326636877eaeee68260e70360"} Mar 18 13:07:48.612262 master-0 kubenswrapper[7599]: I0318 13:07:48.611477 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:48.612802 master-0 kubenswrapper[7599]: I0318 13:07:48.612767 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerStarted","Data":"3cb0fd8ad50843d858abaee21b28a02e53fe5cd0a20c10c6df87f1573285730f"} Mar 18 13:07:48.614245 master-0 kubenswrapper[7599]: I0318 13:07:48.614207 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 18 13:07:48.614308 master-0 kubenswrapper[7599]: I0318 13:07:48.614265 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 18 13:07:48.615479 master-0 kubenswrapper[7599]: I0318 13:07:48.615426 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f","Type":"ContainerStarted","Data":"772ff10871f2688fceb3e214c45da5d6fc0693e88c7f44d1fd3c3965a234fca8"} Mar 18 13:07:48.636481 master-0 kubenswrapper[7599]: I0318 13:07:48.636244 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" 
event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"be09797185331aebbcbf41f53d2dbc11c634e6ebb97e729dc7217ba21143b152"} Mar 18 13:07:48.642367 master-0 kubenswrapper[7599]: I0318 13:07:48.640063 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" event={"ID":"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94","Type":"ContainerStarted","Data":"ad72c23a28f38e825c0456b52af920cafa1e150c8d395ab556d6b63b8187ab88"} Mar 18 13:07:48.643479 master-0 kubenswrapper[7599]: I0318 13:07:48.643194 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" event={"ID":"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806","Type":"ContainerStarted","Data":"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5"} Mar 18 13:07:48.643479 master-0 kubenswrapper[7599]: I0318 13:07:48.643340 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerName="route-controller-manager" containerID="cri-o://cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5" gracePeriod=30 Mar 18 13:07:48.643663 master-0 kubenswrapper[7599]: I0318 13:07:48.643570 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:48.658260 master-0 kubenswrapper[7599]: I0318 13:07:48.656557 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"7e3df25b205a0f81e8b3659d8d979ba18ce4e4a3839b35bafa1b5c2dfee3ce6c"} Mar 18 13:07:48.658260 master-0 kubenswrapper[7599]: I0318 13:07:48.656600 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"4730d27ac1ee53c97b091eb46aa90f3ccfdd14d063c45f26304bbc54bbafa80e"} Mar 18 13:07:48.658260 master-0 kubenswrapper[7599]: I0318 13:07:48.656609 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"9d840b1327f66205cf6b23b15b1f1425e68ae2cb9d5dd3a177c50ba638a9ce65"} Mar 18 13:07:48.661938 master-0 kubenswrapper[7599]: I0318 13:07:48.661908 7599 patch_prober.go:28] interesting pod/route-controller-manager-c8db4484-th8hr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.37:8443/healthz\": EOF" start-of-body= Mar 18 13:07:48.661987 master-0 kubenswrapper[7599]: I0318 13:07:48.661954 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.37:8443/healthz\": EOF" Mar 18 13:07:48.702950 master-0 kubenswrapper[7599]: I0318 13:07:48.702801 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" podStartSLOduration=6.702781693 podStartE2EDuration="6.702781693s" podCreationTimestamp="2026-03-18 13:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:48.692444774 +0000 UTC m=+43.653499016" watchObservedRunningTime="2026-03-18 13:07:48.702781693 +0000 UTC m=+43.663835935" Mar 18 13:07:48.727151 master-0 kubenswrapper[7599]: I0318 13:07:48.726878 7599 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" podStartSLOduration=26.165410035 podStartE2EDuration="35.726854164s" podCreationTimestamp="2026-03-18 13:07:13 +0000 UTC" firstStartedPulling="2026-03-18 13:07:36.567318515 +0000 UTC m=+31.528372757" lastFinishedPulling="2026-03-18 13:07:46.128762644 +0000 UTC m=+41.089816886" observedRunningTime="2026-03-18 13:07:48.712528756 +0000 UTC m=+43.673582998" watchObservedRunningTime="2026-03-18 13:07:48.726854164 +0000 UTC m=+43.687908426" Mar 18 13:07:49.021209 master-0 kubenswrapper[7599]: I0318 13:07:49.021148 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"] Mar 18 13:07:49.030394 master-0 kubenswrapper[7599]: I0318 13:07:49.028207 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" containerName="controller-manager" containerID="cri-o://37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992" gracePeriod=30 Mar 18 13:07:49.037437 master-0 kubenswrapper[7599]: I0318 13:07:49.036249 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:49.094576 master-0 kubenswrapper[7599]: I0318 13:07:49.094532 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4"] Mar 18 13:07:49.094760 master-0 kubenswrapper[7599]: E0318 13:07:49.094713 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerName="route-controller-manager" Mar 18 13:07:49.094760 master-0 kubenswrapper[7599]: I0318 13:07:49.094727 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerName="route-controller-manager" Mar 18 13:07:49.094821 master-0 kubenswrapper[7599]: I0318 13:07:49.094802 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerName="route-controller-manager" Mar 18 13:07:49.095127 master-0 kubenswrapper[7599]: I0318 13:07:49.095109 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4" Mar 18 13:07:49.123168 master-0 kubenswrapper[7599]: I0318 13:07:49.123098 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4"] Mar 18 13:07:49.125463 master-0 kubenswrapper[7599]: I0318 13:07:49.124731 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4"] Mar 18 13:07:49.128824 master-0 kubenswrapper[7599]: I0318 13:07:49.127378 7599 status_manager.go:875] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"168e6bb7-461b-4a11-a79f-cbbf58f47ffd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T13:07:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T13:07:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T13:07:49Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-18T13:07:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/client-ca\\\",\\\"name\\\":\\\"client-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chlpk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2026-03-18T13:07:49Z\\\"}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-56d4dd5dfc-8qqm4\": pods \"route-controller-manager-56d4dd5dfc-8qqm4\" not found" Mar 18 13:07:49.146397 master-0 kubenswrapper[7599]: E0318 13:07:49.145915 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-chlpk serving-cert], unattached volumes=[], failed to process volumes=[client-ca config kube-api-access-chlpk serving-cert]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4" podUID="168e6bb7-461b-4a11-a79f-cbbf58f47ffd" Mar 18 13:07:49.166744 master-0 kubenswrapper[7599]: I0318 13:07:49.166703 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") pod \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " Mar 18 13:07:49.166895 master-0 kubenswrapper[7599]: I0318 13:07:49.166751 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config\") pod \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " Mar 18 13:07:49.166895 master-0 kubenswrapper[7599]: I0318 13:07:49.166797 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca\") pod \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " Mar 18 13:07:49.166895 master-0 kubenswrapper[7599]: I0318 13:07:49.166840 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m4pc\" (UniqueName: \"kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc\") pod \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\" (UID: \"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806\") " Mar 18 13:07:49.170087 master-0 kubenswrapper[7599]: I0318 13:07:49.167478 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config" (OuterVolumeSpecName: "config") pod "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:49.170087 master-0 kubenswrapper[7599]: I0318 13:07:49.168185 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca" (OuterVolumeSpecName: "client-ca") pod "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:49.170787 master-0 kubenswrapper[7599]: I0318 13:07:49.170687 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc" (OuterVolumeSpecName: "kube-api-access-5m4pc") pod "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806"). InnerVolumeSpecName "kube-api-access-5m4pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:49.183235 master-0 kubenswrapper[7599]: I0318 13:07:49.179385 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:07:49.183235 master-0 kubenswrapper[7599]: I0318 13:07:49.180244 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.183982 master-0 kubenswrapper[7599]: I0318 13:07:49.183066 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" (UID: "f53a0371-0ce3-4a3f-b3e1-0043a0f3e806"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:07:49.202254 master-0 kubenswrapper[7599]: I0318 13:07:49.202209 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:07:49.279019 master-0 kubenswrapper[7599]: I0318 13:07:49.278952 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.279019 master-0 kubenswrapper[7599]: I0318 13:07:49.279003 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.279355 master-0 kubenswrapper[7599]: I0318 13:07:49.279066 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.279355 master-0 kubenswrapper[7599]: I0318 13:07:49.279119 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m4pc\" (UniqueName: \"kubernetes.io/projected/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806-kube-api-access-5m4pc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.380537 master-0 kubenswrapper[7599]: I0318 13:07:49.380407 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.380537 master-0 kubenswrapper[7599]: I0318 13:07:49.380467 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.380537 master-0 kubenswrapper[7599]: I0318 13:07:49.380489 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.380537 master-0 kubenswrapper[7599]: I0318 13:07:49.380515 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.388775 master-0 kubenswrapper[7599]: I0318 13:07:49.388651 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168e6bb7-461b-4a11-a79f-cbbf58f47ffd" path="/var/lib/kubelet/pods/168e6bb7-461b-4a11-a79f-cbbf58f47ffd/volumes" Mar 18 13:07:49.478635 master-0 kubenswrapper[7599]: I0318 13:07:49.477477 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:49.483674 master-0 kubenswrapper[7599]: I0318 13:07:49.481746 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.483674 master-0 kubenswrapper[7599]: I0318 13:07:49.481797 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.483674 master-0 kubenswrapper[7599]: I0318 13:07:49.481829 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.483674 master-0 kubenswrapper[7599]: I0318 13:07:49.481855 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.483674 master-0 kubenswrapper[7599]: I0318 13:07:49.483222 7599 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.484455 master-0 kubenswrapper[7599]: I0318 13:07:49.484425 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.486053 master-0 kubenswrapper[7599]: I0318 13:07:49.486002 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.506998 master-0 kubenswrapper[7599]: I0318 13:07:49.506946 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.583594 master-0 kubenswrapper[7599]: I0318 13:07:49.583450 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert\") pod \"b03ef547-0c72-404f-8309-8078ccd57f15\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " 
Mar 18 13:07:49.584080 master-0 kubenswrapper[7599]: I0318 13:07:49.583879 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles\") pod \"b03ef547-0c72-404f-8309-8078ccd57f15\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " Mar 18 13:07:49.584080 master-0 kubenswrapper[7599]: I0318 13:07:49.583907 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca\") pod \"b03ef547-0c72-404f-8309-8078ccd57f15\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " Mar 18 13:07:49.584262 master-0 kubenswrapper[7599]: I0318 13:07:49.584085 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config\") pod \"b03ef547-0c72-404f-8309-8078ccd57f15\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " Mar 18 13:07:49.584262 master-0 kubenswrapper[7599]: I0318 13:07:49.584132 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hz46\" (UniqueName: \"kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46\") pod \"b03ef547-0c72-404f-8309-8078ccd57f15\" (UID: \"b03ef547-0c72-404f-8309-8078ccd57f15\") " Mar 18 13:07:49.586639 master-0 kubenswrapper[7599]: I0318 13:07:49.586585 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b03ef547-0c72-404f-8309-8078ccd57f15" (UID: "b03ef547-0c72-404f-8309-8078ccd57f15"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:07:49.586759 master-0 kubenswrapper[7599]: I0318 13:07:49.586730 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca" (OuterVolumeSpecName: "client-ca") pod "b03ef547-0c72-404f-8309-8078ccd57f15" (UID: "b03ef547-0c72-404f-8309-8078ccd57f15"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:49.587047 master-0 kubenswrapper[7599]: I0318 13:07:49.587006 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config" (OuterVolumeSpecName: "config") pod "b03ef547-0c72-404f-8309-8078ccd57f15" (UID: "b03ef547-0c72-404f-8309-8078ccd57f15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:49.587115 master-0 kubenswrapper[7599]: I0318 13:07:49.587054 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b03ef547-0c72-404f-8309-8078ccd57f15" (UID: "b03ef547-0c72-404f-8309-8078ccd57f15"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:07:49.589859 master-0 kubenswrapper[7599]: I0318 13:07:49.589805 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46" (OuterVolumeSpecName: "kube-api-access-9hz46") pod "b03ef547-0c72-404f-8309-8078ccd57f15" (UID: "b03ef547-0c72-404f-8309-8078ccd57f15"). InnerVolumeSpecName "kube-api-access-9hz46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:07:49.665370 master-0 kubenswrapper[7599]: I0318 13:07:49.665309 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerStarted","Data":"6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a"} Mar 18 13:07:49.670230 master-0 kubenswrapper[7599]: I0318 13:07:49.667360 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f","Type":"ContainerStarted","Data":"6aac6c573e5ccdfd0f675c97899bd4b5c29da8d75eaf745fc557c6d2353170c9"} Mar 18 13:07:49.670230 master-0 kubenswrapper[7599]: I0318 13:07:49.669311 7599 generic.go:334] "Generic (PLEG): container finished" podID="7fb5bad7-07d9-45ac-ad27-a887d12d148f" containerID="36dcdc5868f986f835679461c4df710fd18e0dcfbcbbdc4c74c1460f2651a842" exitCode=0 Mar 18 13:07:49.670230 master-0 kubenswrapper[7599]: I0318 13:07:49.669366 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerDied","Data":"36dcdc5868f986f835679461c4df710fd18e0dcfbcbbdc4c74c1460f2651a842"} Mar 18 13:07:49.675038 master-0 kubenswrapper[7599]: I0318 13:07:49.675011 7599 generic.go:334] "Generic (PLEG): container finished" podID="b03ef547-0c72-404f-8309-8078ccd57f15" containerID="37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992" exitCode=0 Mar 18 13:07:49.675157 master-0 kubenswrapper[7599]: I0318 13:07:49.675069 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" event={"ID":"b03ef547-0c72-404f-8309-8078ccd57f15","Type":"ContainerDied","Data":"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992"} Mar 18 13:07:49.675157 
master-0 kubenswrapper[7599]: I0318 13:07:49.675090 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" event={"ID":"b03ef547-0c72-404f-8309-8078ccd57f15","Type":"ContainerDied","Data":"3093ac8ba00b798fb5f357f30926a61bbcc60409e576af7343bf0b0aec18058b"} Mar 18 13:07:49.675157 master-0 kubenswrapper[7599]: I0318 13:07:49.675109 7599 scope.go:117] "RemoveContainer" containerID="37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992" Mar 18 13:07:49.675274 master-0 kubenswrapper[7599]: I0318 13:07:49.675238 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-774f95d4b8-2gwjr" Mar 18 13:07:49.678678 master-0 kubenswrapper[7599]: I0318 13:07:49.678617 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerStarted","Data":"31b0fc8784eb8367b69b8a7c847bfd1469f93f534490b89c89aa0c82a72151b2"} Mar 18 13:07:49.682347 master-0 kubenswrapper[7599]: I0318 13:07:49.682255 7599 generic.go:334] "Generic (PLEG): container finished" podID="a2bdf5b0-8764-4b15-97c9-20af36634fd0" containerID="fbf0aecf9f06b167d5a00c6e13e0a1fb74d188d7a55e8c083388c3f5b4d41a40" exitCode=0 Mar 18 13:07:49.682425 master-0 kubenswrapper[7599]: I0318 13:07:49.682327 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerDied","Data":"fbf0aecf9f06b167d5a00c6e13e0a1fb74d188d7a55e8c083388c3f5b4d41a40"} Mar 18 13:07:49.689040 master-0 kubenswrapper[7599]: I0318 13:07:49.686672 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b03ef547-0c72-404f-8309-8078ccd57f15-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.689040 master-0 
kubenswrapper[7599]: I0318 13:07:49.687013 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.689040 master-0 kubenswrapper[7599]: I0318 13:07:49.687048 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.689040 master-0 kubenswrapper[7599]: I0318 13:07:49.687059 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03ef547-0c72-404f-8309-8078ccd57f15-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.689040 master-0 kubenswrapper[7599]: I0318 13:07:49.687081 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hz46\" (UniqueName: \"kubernetes.io/projected/b03ef547-0c72-404f-8309-8078ccd57f15-kube-api-access-9hz46\") on node \"master-0\" DevicePath \"\"" Mar 18 13:07:49.692540 master-0 kubenswrapper[7599]: I0318 13:07:49.692496 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"2a8d489983f6bd76b9f322763b6391f38fc2342999f533803f70c94c9fb9e891"} Mar 18 13:07:49.699245 master-0 kubenswrapper[7599]: I0318 13:07:49.699180 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"ff6705ae022d0bf84f4c53bcb269bd6cef0bdaa6cd2d1b607917b732069608ca"} Mar 18 13:07:49.699516 master-0 kubenswrapper[7599]: I0318 13:07:49.699335 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:49.709098 master-0 kubenswrapper[7599]: I0318 
13:07:49.709025 7599 generic.go:334] "Generic (PLEG): container finished" podID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" containerID="cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5" exitCode=0 Mar 18 13:07:49.709285 master-0 kubenswrapper[7599]: I0318 13:07:49.709130 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4" Mar 18 13:07:49.709641 master-0 kubenswrapper[7599]: I0318 13:07:49.709529 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" Mar 18 13:07:49.710512 master-0 kubenswrapper[7599]: I0318 13:07:49.710047 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" event={"ID":"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806","Type":"ContainerDied","Data":"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5"} Mar 18 13:07:49.710512 master-0 kubenswrapper[7599]: I0318 13:07:49.710106 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr" event={"ID":"f53a0371-0ce3-4a3f-b3e1-0043a0f3e806","Type":"ContainerDied","Data":"d0124db992c8b4a40b2921c9c854708919e8cb86949f743f1b0be67934dcf587"} Mar 18 13:07:49.714729 master-0 kubenswrapper[7599]: I0318 13:07:49.713102 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.713084045 podStartE2EDuration="3.713084045s" podCreationTimestamp="2026-03-18 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:49.707245979 +0000 UTC m=+44.668300221" watchObservedRunningTime="2026-03-18 13:07:49.713084045 +0000 UTC m=+44.674138287" Mar 
18 13:07:49.716070 master-0 kubenswrapper[7599]: I0318 13:07:49.715401 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:07:49.724815 master-0 kubenswrapper[7599]: I0318 13:07:49.724764 7599 scope.go:117] "RemoveContainer" containerID="37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992" Mar 18 13:07:49.725392 master-0 kubenswrapper[7599]: E0318 13:07:49.725353 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992\": container with ID starting with 37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992 not found: ID does not exist" containerID="37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992" Mar 18 13:07:49.725634 master-0 kubenswrapper[7599]: I0318 13:07:49.725605 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4" Mar 18 13:07:49.725801 master-0 kubenswrapper[7599]: I0318 13:07:49.725753 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992"} err="failed to get container status \"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992\": rpc error: code = NotFound desc = could not find container \"37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992\": container with ID starting with 37418d71a90f4e5e4fb106e54c1ba781c996f1f874aea276bb0d562acfaba992 not found: ID does not exist" Mar 18 13:07:49.725801 master-0 kubenswrapper[7599]: I0318 13:07:49.725788 7599 scope.go:117] "RemoveContainer" containerID="cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5" Mar 18 13:07:49.754139 master-0 kubenswrapper[7599]: I0318 13:07:49.753370 7599 scope.go:117] "RemoveContainer" containerID="cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5" Mar 18 13:07:49.754139 master-0 kubenswrapper[7599]: E0318 13:07:49.753720 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5\": container with ID starting with cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5 not found: ID does not exist" containerID="cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5" Mar 18 13:07:49.754139 master-0 kubenswrapper[7599]: I0318 13:07:49.753752 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5"} err="failed to get container status \"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5\": rpc error: code = NotFound desc = could not find container 
\"cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5\": container with ID starting with cfb80ec80f3f4bb3da8f90e6e2d50739abb474eea488f9278f9fca73770892e5 not found: ID does not exist" Mar 18 13:07:49.796997 master-0 kubenswrapper[7599]: I0318 13:07:49.788126 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.78810621 podStartE2EDuration="3.78810621s" podCreationTimestamp="2026-03-18 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:49.782204673 +0000 UTC m=+44.743258915" watchObservedRunningTime="2026-03-18 13:07:49.78810621 +0000 UTC m=+44.749160452" Mar 18 13:07:49.810290 master-0 kubenswrapper[7599]: I0318 13:07:49.809452 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:49.814157 master-0 kubenswrapper[7599]: I0318 13:07:49.814102 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8db4484-th8hr"] Mar 18 13:07:49.817991 master-0 kubenswrapper[7599]: I0318 13:07:49.817915 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:07:49.867760 master-0 kubenswrapper[7599]: I0318 13:07:49.867211 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-92s8c" podStartSLOduration=3.741650477 podStartE2EDuration="13.867192957s" podCreationTimestamp="2026-03-18 13:07:36 +0000 UTC" firstStartedPulling="2026-03-18 13:07:37.708701203 +0000 UTC m=+32.669755445" lastFinishedPulling="2026-03-18 13:07:47.834243683 +0000 UTC m=+42.795297925" observedRunningTime="2026-03-18 13:07:49.865600447 +0000 UTC m=+44.826654689" watchObservedRunningTime="2026-03-18 13:07:49.867192957 +0000 UTC m=+44.828247199" Mar 18 13:07:49.884078 master-0 kubenswrapper[7599]: I0318 13:07:49.884030 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"] Mar 18 13:07:49.896273 master-0 kubenswrapper[7599]: I0318 13:07:49.896157 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-774f95d4b8-2gwjr"] Mar 18 13:07:50.300530 master-0 kubenswrapper[7599]: I0318 13:07:50.300392 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:07:50.489256 master-0 kubenswrapper[7599]: I0318 13:07:50.488905 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"] Mar 18 13:07:50.490246 master-0 kubenswrapper[7599]: E0318 13:07:50.489863 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" containerName="controller-manager" Mar 18 13:07:50.490246 master-0 kubenswrapper[7599]: I0318 13:07:50.489883 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" containerName="controller-manager" Mar 18 13:07:50.490246 
master-0 kubenswrapper[7599]: I0318 13:07:50.489970 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" containerName="controller-manager"
Mar 18 13:07:50.490959 master-0 kubenswrapper[7599]: I0318 13:07:50.490530 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.497526 master-0 kubenswrapper[7599]: I0318 13:07:50.494278 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:07:50.497526 master-0 kubenswrapper[7599]: I0318 13:07:50.494654 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:07:50.497526 master-0 kubenswrapper[7599]: I0318 13:07:50.494816 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:07:50.497526 master-0 kubenswrapper[7599]: I0318 13:07:50.494948 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:07:50.497526 master-0 kubenswrapper[7599]: I0318 13:07:50.495228 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:07:50.502916 master-0 kubenswrapper[7599]: I0318 13:07:50.502468 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"]
Mar 18 13:07:50.512928 master-0 kubenswrapper[7599]: I0318 13:07:50.511676 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:07:50.592930 master-0 kubenswrapper[7599]: I0318 13:07:50.592884 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"]
Mar 18 13:07:50.594780 master-0 kubenswrapper[7599]: I0318 13:07:50.594754 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" podUID="2b06a568-4dad-44b4-8312-aa52911dbfb0" containerName="cluster-version-operator" containerID="cri-o://893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc" gracePeriod=130
Mar 18 13:07:50.603545 master-0 kubenswrapper[7599]: I0318 13:07:50.601554 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.603545 master-0 kubenswrapper[7599]: I0318 13:07:50.601615 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.603545 master-0 kubenswrapper[7599]: I0318 13:07:50.601636 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.603545 master-0 kubenswrapper[7599]: I0318 13:07:50.601662 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nvb\" (UniqueName: \"kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.603545 master-0 kubenswrapper[7599]: I0318 13:07:50.601718 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.703200 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.703267 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nvb\" (UniqueName: \"kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.703295 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.703323 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.703357 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.704834 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.707884 master-0 kubenswrapper[7599]: I0318 13:07:50.707789 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.716942 master-0 kubenswrapper[7599]: I0318 13:07:50.708601 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.718820 master-0 kubenswrapper[7599]: I0318 13:07:50.718757 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.723044 master-0 kubenswrapper[7599]: I0318 13:07:50.720339 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"e591f5850f593f50de8a2695774798cb0f8224a8598b6cd3cc1b58fe720d0858"}
Mar 18 13:07:50.723044 master-0 kubenswrapper[7599]: I0318 13:07:50.720378 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"72bcea02bc364ef9e96c66bb5c3590d1a8a24253dd3b5839088e3771a465ac82"}
Mar 18 13:07:50.731379 master-0 kubenswrapper[7599]: I0318 13:07:50.731313 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nvb\" (UniqueName: \"kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb\") pod \"controller-manager-fffb75699-b7pwr\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") " pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.743131 master-0 kubenswrapper[7599]: I0318 13:07:50.743037 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerStarted","Data":"c363b4bee719d98f91140350d9af5c483f50d31b877a20b1c896b84c11923483"}
Mar 18 13:07:50.749594 master-0 kubenswrapper[7599]: I0318 13:07:50.749560 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56d4dd5dfc-8qqm4"
Mar 18 13:07:50.759472 master-0 kubenswrapper[7599]: I0318 13:07:50.759382 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" podStartSLOduration=15.637857542999999 podStartE2EDuration="26.759368517s" podCreationTimestamp="2026-03-18 13:07:24 +0000 UTC" firstStartedPulling="2026-03-18 13:07:36.76718421 +0000 UTC m=+31.728238452" lastFinishedPulling="2026-03-18 13:07:47.888695184 +0000 UTC m=+42.849749426" observedRunningTime="2026-03-18 13:07:50.756234709 +0000 UTC m=+45.717288951" watchObservedRunningTime="2026-03-18 13:07:50.759368517 +0000 UTC m=+45.720422759"
Mar 18 13:07:50.786436 master-0 kubenswrapper[7599]: I0318 13:07:50.784297 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" podStartSLOduration=4.829412262 podStartE2EDuration="15.78427764s" podCreationTimestamp="2026-03-18 13:07:35 +0000 UTC" firstStartedPulling="2026-03-18 13:07:36.848063682 +0000 UTC m=+31.809117924" lastFinishedPulling="2026-03-18 13:07:47.80292906 +0000 UTC m=+42.763983302" observedRunningTime="2026-03-18 13:07:50.782355862 +0000 UTC m=+45.743410104" watchObservedRunningTime="2026-03-18 13:07:50.78427764 +0000 UTC m=+45.745331872"
Mar 18 13:07:50.870510 master-0 kubenswrapper[7599]: I0318 13:07:50.870053 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:07:50.919988 master-0 kubenswrapper[7599]: I0318 13:07:50.919629 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 13:07:50.919988 master-0 kubenswrapper[7599]: I0318 13:07:50.919860 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="250b63d5-21ee-44d3-821e-f42a8112dc50" containerName="installer" containerID="cri-o://53bd0f911da22f6347919de47020dd5ee65cf68785aa75b9d25bd48d7e0221f2" gracePeriod=30
Mar 18 13:07:51.148179 master-0 kubenswrapper[7599]: I0318 13:07:51.147862 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:07:51.148179 master-0 kubenswrapper[7599]: I0318 13:07:51.147937 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:07:51.262803 master-0 kubenswrapper[7599]: I0318 13:07:51.262744 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:07:51.341451 master-0 kubenswrapper[7599]: I0318 13:07:51.341073 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"]
Mar 18 13:07:51.343488 master-0 kubenswrapper[7599]: I0318 13:07:51.343468 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.346359 master-0 kubenswrapper[7599]: I0318 13:07:51.346299 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 13:07:51.363019 master-0 kubenswrapper[7599]: I0318 13:07:51.362958 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"]
Mar 18 13:07:51.388434 master-0 kubenswrapper[7599]: I0318 13:07:51.388380 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03ef547-0c72-404f-8309-8078ccd57f15" path="/var/lib/kubelet/pods/b03ef547-0c72-404f-8309-8078ccd57f15/volumes"
Mar 18 13:07:51.389329 master-0 kubenswrapper[7599]: I0318 13:07:51.389305 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53a0371-0ce3-4a3f-b3e1-0043a0f3e806" path="/var/lib/kubelet/pods/f53a0371-0ce3-4a3f-b3e1-0043a0f3e806/volumes"
Mar 18 13:07:51.529447 master-0 kubenswrapper[7599]: I0318 13:07:51.527338 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhcj\" (UniqueName: \"kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.529447 master-0 kubenswrapper[7599]: I0318 13:07:51.527455 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.529447 master-0 kubenswrapper[7599]: I0318 13:07:51.527483 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.628702 master-0 kubenswrapper[7599]: I0318 13:07:51.628387 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhcj\" (UniqueName: \"kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.628702 master-0 kubenswrapper[7599]: I0318 13:07:51.628472 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.628702 master-0 kubenswrapper[7599]: I0318 13:07:51.628491 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.629931 master-0 kubenswrapper[7599]: I0318 13:07:51.629887 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.640222 master-0 kubenswrapper[7599]: I0318 13:07:51.640185 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.644550 master-0 kubenswrapper[7599]: I0318 13:07:51.644491 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhcj\" (UniqueName: \"kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.665187 master-0 kubenswrapper[7599]: I0318 13:07:51.665114 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:07:51.761887 master-0 kubenswrapper[7599]: I0318 13:07:51.761831 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:07:53.126798 master-0 kubenswrapper[7599]: I0318 13:07:53.126751 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 13:07:53.127476 master-0 kubenswrapper[7599]: I0318 13:07:53.127453 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.140289 master-0 kubenswrapper[7599]: I0318 13:07:53.140225 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 13:07:53.250339 master-0 kubenswrapper[7599]: I0318 13:07:53.250280 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.250534 master-0 kubenswrapper[7599]: I0318 13:07:53.250366 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.250534 master-0 kubenswrapper[7599]: I0318 13:07:53.250395 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.352088 master-0 kubenswrapper[7599]: I0318 13:07:53.351579 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.352088 master-0 kubenswrapper[7599]: I0318 13:07:53.351625 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.352088 master-0 kubenswrapper[7599]: I0318 13:07:53.351665 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.352088 master-0 kubenswrapper[7599]: I0318 13:07:53.351931 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.352088 master-0 kubenswrapper[7599]: I0318 13:07:53.351981 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.380883 master-0 kubenswrapper[7599]: I0318 13:07:53.380778 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access\") pod \"installer-3-master-0\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:53.460756 master-0 kubenswrapper[7599]: I0318 13:07:53.460707 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:07:54.088727 master-0 kubenswrapper[7599]: W0318 13:07:54.088649 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda350f317_f058_4102_af5c_cbba46d35e02.slice/crio-71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491 WatchSource:0}: Error finding container 71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491: Status 404 returned error can't find the container with id 71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491
Mar 18 13:07:54.153494 master-0 kubenswrapper[7599]: I0318 13:07:54.153462 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:54.267482 master-0 kubenswrapper[7599]: I0318 13:07:54.267344 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") pod \"2b06a568-4dad-44b4-8312-aa52911dbfb0\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") "
Mar 18 13:07:54.267482 master-0 kubenswrapper[7599]: I0318 13:07:54.267456 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") pod \"2b06a568-4dad-44b4-8312-aa52911dbfb0\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") "
Mar 18 13:07:54.267796 master-0 kubenswrapper[7599]: I0318 13:07:54.267539 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") pod \"2b06a568-4dad-44b4-8312-aa52911dbfb0\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") "
Mar 18 13:07:54.267796 master-0 kubenswrapper[7599]: I0318 13:07:54.267572 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") pod \"2b06a568-4dad-44b4-8312-aa52911dbfb0\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") "
Mar 18 13:07:54.267796 master-0 kubenswrapper[7599]: I0318 13:07:54.267609 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") pod \"2b06a568-4dad-44b4-8312-aa52911dbfb0\" (UID: \"2b06a568-4dad-44b4-8312-aa52911dbfb0\") "
Mar 18 13:07:54.267796 master-0 kubenswrapper[7599]: I0318 13:07:54.267618 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "2b06a568-4dad-44b4-8312-aa52911dbfb0" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:54.268310 master-0 kubenswrapper[7599]: I0318 13:07:54.267832 7599 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:54.268310 master-0 kubenswrapper[7599]: I0318 13:07:54.267832 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "2b06a568-4dad-44b4-8312-aa52911dbfb0" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:07:54.268310 master-0 kubenswrapper[7599]: I0318 13:07:54.268021 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca" (OuterVolumeSpecName: "service-ca") pod "2b06a568-4dad-44b4-8312-aa52911dbfb0" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:07:54.271330 master-0 kubenswrapper[7599]: I0318 13:07:54.271281 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b06a568-4dad-44b4-8312-aa52911dbfb0" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:07:54.275333 master-0 kubenswrapper[7599]: I0318 13:07:54.273171 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b06a568-4dad-44b4-8312-aa52911dbfb0" (UID: "2b06a568-4dad-44b4-8312-aa52911dbfb0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:07:54.373465 master-0 kubenswrapper[7599]: I0318 13:07:54.369469 7599 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2b06a568-4dad-44b4-8312-aa52911dbfb0-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:54.373465 master-0 kubenswrapper[7599]: I0318 13:07:54.369512 7599 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2b06a568-4dad-44b4-8312-aa52911dbfb0-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:54.373465 master-0 kubenswrapper[7599]: I0318 13:07:54.369527 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b06a568-4dad-44b4-8312-aa52911dbfb0-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:54.373465 master-0 kubenswrapper[7599]: I0318 13:07:54.369539 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b06a568-4dad-44b4-8312-aa52911dbfb0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:07:54.556569 master-0 kubenswrapper[7599]: I0318 13:07:54.556515 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"]
Mar 18 13:07:54.563775 master-0 kubenswrapper[7599]: I0318 13:07:54.563627 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 18 13:07:54.565817 master-0 kubenswrapper[7599]: W0318 13:07:54.565781 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod708812af_3249_4d57_8f28_055da22a7329.slice/crio-8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f WatchSource:0}: Error finding container 8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f: Status 404 returned error can't find the container with id 8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f
Mar 18 13:07:54.568672 master-0 kubenswrapper[7599]: W0318 13:07:54.568633 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod615539dc_56e1_4489_9aee_33b3e769d4fc.slice/crio-2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f WatchSource:0}: Error finding container 2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f: Status 404 returned error can't find the container with id 2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f
Mar 18 13:07:54.688078 master-0 kubenswrapper[7599]: I0318 13:07:54.686985 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"]
Mar 18 13:07:54.801125 master-0 kubenswrapper[7599]: I0318 13:07:54.800177 7599 generic.go:334] "Generic (PLEG): container finished" podID="2b06a568-4dad-44b4-8312-aa52911dbfb0" containerID="893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc" exitCode=0
Mar 18 13:07:54.801125 master-0 kubenswrapper[7599]: I0318 13:07:54.800271 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"
Mar 18 13:07:54.801125 master-0 kubenswrapper[7599]: I0318 13:07:54.800506 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" event={"ID":"2b06a568-4dad-44b4-8312-aa52911dbfb0","Type":"ContainerDied","Data":"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"}
Mar 18 13:07:54.801125 master-0 kubenswrapper[7599]: I0318 13:07:54.800566 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp" event={"ID":"2b06a568-4dad-44b4-8312-aa52911dbfb0","Type":"ContainerDied","Data":"8eed59e505def0bdb2b5fa411b6441ef6eb68d8285c1a135dd0ca4c116a1e491"}
Mar 18 13:07:54.801125 master-0 kubenswrapper[7599]: I0318 13:07:54.800593 7599 scope.go:117] "RemoveContainer" containerID="893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"
Mar 18 13:07:54.801908 master-0 kubenswrapper[7599]: I0318 13:07:54.801883 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerStarted","Data":"2db7c8466cbb5260cee9337225110234f66cfda1042ae3eab42421d66a814e6e"}
Mar 18 13:07:54.804063 master-0 kubenswrapper[7599]: I0318 13:07:54.804027 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerStarted","Data":"74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9"}
Mar 18 13:07:54.804162 master-0 kubenswrapper[7599]: I0318 13:07:54.804066 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerStarted","Data":"71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491"}
Mar 18 13:07:54.808798 master-0 kubenswrapper[7599]: I0318 13:07:54.806505 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"
Mar 18 13:07:54.809347 master-0 kubenswrapper[7599]: I0318 13:07:54.809196 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerStarted","Data":"2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f"}
Mar 18 13:07:54.812197 master-0 kubenswrapper[7599]: I0318 13:07:54.812165 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" event={"ID":"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9","Type":"ContainerStarted","Data":"9d653ae820cf4c6210b4fa575e4bc19b9b9f22c2b83029e507c4eb09ffdf189c"}
Mar 18 13:07:54.816293 master-0 kubenswrapper[7599]: I0318 13:07:54.816250 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"
Mar 18 13:07:54.817554 master-0 kubenswrapper[7599]: I0318 13:07:54.817145 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:54.829483 master-0 kubenswrapper[7599]: I0318 13:07:54.829391 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" podStartSLOduration=5.829356988 podStartE2EDuration="5.829356988s" podCreationTimestamp="2026-03-18 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:54.825317896 +0000 UTC m=+49.786372138" watchObservedRunningTime="2026-03-18 13:07:54.829356988 +0000 UTC m=+49.790411240"
Mar 18 13:07:54.834604 master-0 kubenswrapper[7599]: I0318 13:07:54.834552 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:07:54.844084 master-0 kubenswrapper[7599]: I0318 13:07:54.843155 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"7a03a9b2903f78b606da104794c398882ae1463636eb659e02174a991cae43c1"}
Mar 18 13:07:54.844084 master-0 kubenswrapper[7599]: I0318 13:07:54.843345 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:07:54.846818 master-0 kubenswrapper[7599]: I0318 13:07:54.846528 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"46c5f01485b7d374a9f96f911d1e08a0851a6da27ef5610d41f394290374b7e5"}
Mar 18 13:07:54.846818 master-0 kubenswrapper[7599]: I0318 13:07:54.846578 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f"}
Mar 18 13:07:54.849770 master-0 kubenswrapper[7599]: I0318 13:07:54.849625 7599 scope.go:117] "RemoveContainer" containerID="893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"
Mar 18 13:07:54.851251 master-0 kubenswrapper[7599]: E0318 13:07:54.851142 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc\": container with ID starting with 893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc not found: ID does not exist" containerID="893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"
Mar 18 13:07:54.851251 master-0 kubenswrapper[7599]: I0318 13:07:54.851208 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc"} err="failed to get container status \"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc\": rpc error: code = NotFound desc = could not find container \"893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc\": container with ID starting with 893058ed098fcd8ac9014b0600e582c79fa2f151d93e9514c20dfe69557e7efc not found: ID does not exist"
Mar 18 13:07:54.875458 master-0 kubenswrapper[7599]: I0318 13:07:54.874644 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:54.875458 master-0 kubenswrapper[7599]: I0318 13:07:54.875189 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:54.896649 master-0 kubenswrapper[7599]: I0318 13:07:54.896603 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:07:54.916778 master-0 kubenswrapper[7599]: I0318 13:07:54.915471 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-gvmtv"]
Mar 18 13:07:54.916778 master-0 kubenswrapper[7599]: E0318 13:07:54.915728 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b06a568-4dad-44b4-8312-aa52911dbfb0" containerName="cluster-version-operator"
Mar 18 13:07:54.916778 master-0 kubenswrapper[7599]: I0318 13:07:54.915740 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b06a568-4dad-44b4-8312-aa52911dbfb0" containerName="cluster-version-operator"
Mar 18 13:07:54.916778 master-0 kubenswrapper[7599]: I0318 13:07:54.915830 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b06a568-4dad-44b4-8312-aa52911dbfb0" containerName="cluster-version-operator"
Mar 18 13:07:54.916778 master-0 kubenswrapper[7599]: I0318 13:07:54.916149 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:07:54.920442 master-0 kubenswrapper[7599]: I0318 13:07:54.920336 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 13:07:54.920579 master-0 kubenswrapper[7599]: I0318 13:07:54.920341 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 13:07:54.921273 master-0 kubenswrapper[7599]: I0318 13:07:54.921021 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7"]
Mar 18 13:07:54.922081 master-0 kubenswrapper[7599]: I0318 13:07:54.921354 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 13:07:54.922081 master-0 kubenswrapper[7599]: I0318 13:07:54.921648 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" Mar 18 13:07:54.922930 master-0 kubenswrapper[7599]: I0318 13:07:54.922451 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:07:54.922930 master-0 kubenswrapper[7599]: I0318 13:07:54.922654 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 13:07:54.922930 master-0 kubenswrapper[7599]: I0318 13:07:54.922766 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 13:07:54.928800 master-0 kubenswrapper[7599]: I0318 13:07:54.928249 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc"] Mar 18 13:07:54.929085 master-0 kubenswrapper[7599]: I0318 13:07:54.929041 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:07:54.930319 master-0 kubenswrapper[7599]: I0318 13:07:54.930295 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:07:54.947045 master-0 kubenswrapper[7599]: I0318 13:07:54.944225 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"] Mar 18 13:07:54.947045 master-0 kubenswrapper[7599]: I0318 13:07:54.946345 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-f2hjp"] Mar 18 13:07:54.949982 master-0 kubenswrapper[7599]: I0318 13:07:54.948837 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7"] Mar 18 13:07:54.950658 master-0 kubenswrapper[7599]: I0318 13:07:54.950453 
7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc"] Mar 18 13:07:54.992819 master-0 kubenswrapper[7599]: I0318 13:07:54.992108 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw"] Mar 18 13:07:54.994781 master-0 kubenswrapper[7599]: I0318 13:07:54.994747 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:54.996537 master-0 kubenswrapper[7599]: I0318 13:07:54.996505 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:07:54.996764 master-0 kubenswrapper[7599]: I0318 13:07:54.996731 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:07:54.997175 master-0 kubenswrapper[7599]: I0318 13:07:54.997137 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079056 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079127 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079254 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079333 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079358 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnqw\" (UniqueName: \"kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw\") pod \"network-check-source-b4bf74f6-tw7c7\" (UID: \"f3be6654-f969-4952-976d-218c86af7d2d\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079461 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.080443 master-0 kubenswrapper[7599]: I0318 13:07:55.079514 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlb8t\" (UniqueName: \"kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.181101 master-0 kubenswrapper[7599]: I0318 13:07:55.181007 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.181101 master-0 kubenswrapper[7599]: I0318 13:07:55.181065 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.181101 master-0 kubenswrapper[7599]: I0318 13:07:55.181093 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:07:55.181101 master-0 kubenswrapper[7599]: I0318 13:07:55.181118 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: 
\"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181138 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnqw\" (UniqueName: \"kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw\") pod \"network-check-source-b4bf74f6-tw7c7\" (UID: \"f3be6654-f969-4952-976d-218c86af7d2d\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181167 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181190 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181209 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181224 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181240 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181257 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlb8t\" (UniqueName: \"kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.181921 master-0 kubenswrapper[7599]: I0318 13:07:55.181275 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.182220 master-0 kubenswrapper[7599]: I0318 13:07:55.182052 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: 
\"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.185622 master-0 kubenswrapper[7599]: I0318 13:07:55.185577 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.187953 master-0 kubenswrapper[7599]: I0318 13:07:55.187636 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.189956 master-0 kubenswrapper[7599]: I0318 13:07:55.189914 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:07:55.190923 master-0 kubenswrapper[7599]: I0318 13:07:55.190883 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.222431 master-0 kubenswrapper[7599]: I0318 13:07:55.218509 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnqw\" (UniqueName: 
\"kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw\") pod \"network-check-source-b4bf74f6-tw7c7\" (UID: \"f3be6654-f969-4952-976d-218c86af7d2d\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" Mar 18 13:07:55.229471 master-0 kubenswrapper[7599]: I0318 13:07:55.229362 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlb8t\" (UniqueName: \"kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.251134 master-0 kubenswrapper[7599]: I0318 13:07:55.250469 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:07:55.269672 master-0 kubenswrapper[7599]: I0318 13:07:55.267508 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" Mar 18 13:07:55.279615 master-0 kubenswrapper[7599]: I0318 13:07:55.279156 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:07:55.282957 master-0 kubenswrapper[7599]: I0318 13:07:55.282881 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.282957 master-0 kubenswrapper[7599]: I0318 13:07:55.282928 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.282957 master-0 kubenswrapper[7599]: I0318 13:07:55.282949 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.283257 master-0 kubenswrapper[7599]: I0318 13:07:55.282964 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.283257 master-0 kubenswrapper[7599]: I0318 13:07:55.282994 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.283257 master-0 kubenswrapper[7599]: I0318 13:07:55.283049 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.283872 master-0 kubenswrapper[7599]: I0318 13:07:55.283818 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.284120 master-0 kubenswrapper[7599]: I0318 13:07:55.284090 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.286044 master-0 kubenswrapper[7599]: I0318 13:07:55.286019 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.305497 master-0 kubenswrapper[7599]: I0318 13:07:55.305158 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.353491 master-0 kubenswrapper[7599]: I0318 13:07:55.352562 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:07:55.380715 master-0 kubenswrapper[7599]: I0318 13:07:55.378901 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b06a568-4dad-44b4-8312-aa52911dbfb0" path="/var/lib/kubelet/pods/2b06a568-4dad-44b4-8312-aa52911dbfb0/volumes" Mar 18 13:07:55.764663 master-0 kubenswrapper[7599]: I0318 13:07:55.764609 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc"] Mar 18 13:07:55.770243 master-0 kubenswrapper[7599]: W0318 13:07:55.770187 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6db2bfbd_d8db_4384_8979_23e8a1e87e5e.slice/crio-bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521 WatchSource:0}: Error finding container bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521: Status 404 returned error can't find the container with id bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521 Mar 18 13:07:55.845534 master-0 kubenswrapper[7599]: I0318 13:07:55.840670 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7"] Mar 18 13:07:55.845534 master-0 
kubenswrapper[7599]: W0318 13:07:55.845080 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3be6654_f969_4952_976d_218c86af7d2d.slice/crio-85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e WatchSource:0}: Error finding container 85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e: Status 404 returned error can't find the container with id 85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e Mar 18 13:07:55.869432 master-0 kubenswrapper[7599]: I0318 13:07:55.867526 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerStarted","Data":"60014c22022db848874d3a05474beca08d37dd24a5fad732534f373108a2dd40"} Mar 18 13:07:55.876051 master-0 kubenswrapper[7599]: I0318 13:07:55.876001 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" event={"ID":"f3be6654-f969-4952-976d-218c86af7d2d","Type":"ContainerStarted","Data":"85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e"} Mar 18 13:07:55.884449 master-0 kubenswrapper[7599]: I0318 13:07:55.882754 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" event={"ID":"8ffe2e75-9cc3-4244-95c8-800463c5aa28","Type":"ContainerStarted","Data":"6c8e2099733ce74e4ed7853f255ed961973595eaee20fcbedbc997cee28f6bf1"} Mar 18 13:07:55.884449 master-0 kubenswrapper[7599]: I0318 13:07:55.882790 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" event={"ID":"8ffe2e75-9cc3-4244-95c8-800463c5aa28","Type":"ContainerStarted","Data":"77922c67e22a90e02f2bc6f9c2c3361d1f9624d65d1b4a186c450f61aa3c27f3"} Mar 18 13:07:55.886820 master-0 kubenswrapper[7599]: I0318 13:07:55.885273 7599 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"256b1acfd961770152114ac2f96390408c67e8cdc51d71250cbe9043324535ff"} Mar 18 13:07:55.887983 master-0 kubenswrapper[7599]: I0318 13:07:55.887960 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" event={"ID":"6db2bfbd-d8db-4384-8979-23e8a1e87e5e","Type":"ContainerStarted","Data":"bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521"} Mar 18 13:07:55.902088 master-0 kubenswrapper[7599]: I0318 13:07:55.901765 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.901745382 podStartE2EDuration="2.901745382s" podCreationTimestamp="2026-03-18 13:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:55.900743397 +0000 UTC m=+50.861797639" watchObservedRunningTime="2026-03-18 13:07:55.901745382 +0000 UTC m=+50.862799624" Mar 18 13:07:55.906733 master-0 kubenswrapper[7599]: I0318 13:07:55.905643 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerStarted","Data":"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"} Mar 18 13:07:55.906733 master-0 kubenswrapper[7599]: I0318 13:07:55.906019 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" Mar 18 13:07:55.913474 master-0 kubenswrapper[7599]: I0318 13:07:55.912121 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"ca95e515f4a5a1b63626328ea2ad328d0f3f07c258a5281fc61399ac842b383f"} Mar 18 13:07:55.918142 master-0 kubenswrapper[7599]: I0318 13:07:55.918102 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" Mar 18 13:07:55.920571 master-0 kubenswrapper[7599]: I0318 13:07:55.920539 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:07:56.170275 master-0 kubenswrapper[7599]: I0318 13:07:56.169759 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" podStartSLOduration=5.16974298 podStartE2EDuration="5.16974298s" podCreationTimestamp="2026-03-18 13:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:56.167519804 +0000 UTC m=+51.128574046" watchObservedRunningTime="2026-03-18 13:07:56.16974298 +0000 UTC m=+51.130797222" Mar 18 13:07:56.917574 master-0 kubenswrapper[7599]: I0318 13:07:56.917521 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" event={"ID":"f3be6654-f969-4952-976d-218c86af7d2d","Type":"ContainerStarted","Data":"cfad49cbff250b58b653ccae069695f63c9bab515760a0757841107af6244cda"} Mar 18 13:07:57.192871 master-0 kubenswrapper[7599]: I0318 13:07:57.192673 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" podStartSLOduration=3.192650978 podStartE2EDuration="3.192650978s" podCreationTimestamp="2026-03-18 13:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 13:07:57.19109278 +0000 UTC m=+52.152147062" watchObservedRunningTime="2026-03-18 13:07:57.192650978 +0000 UTC m=+52.153705250" Mar 18 13:07:57.975285 master-0 kubenswrapper[7599]: I0318 13:07:57.974466 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" podStartSLOduration=104.97044491 podStartE2EDuration="1m44.97044491s" podCreationTimestamp="2026-03-18 13:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:57.968844209 +0000 UTC m=+52.929898461" watchObservedRunningTime="2026-03-18 13:07:57.97044491 +0000 UTC m=+52.931499152" Mar 18 13:07:58.104293 master-0 kubenswrapper[7599]: I0318 13:07:58.100532 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podStartSLOduration=9.10051598 podStartE2EDuration="9.10051598s" podCreationTimestamp="2026-03-18 13:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:07:58.031171048 +0000 UTC m=+52.992225290" watchObservedRunningTime="2026-03-18 13:07:58.10051598 +0000 UTC m=+53.061570222" Mar 18 13:07:58.132478 master-0 kubenswrapper[7599]: I0318 13:07:58.118528 7599 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 13:07:58.309404 master-0 kubenswrapper[7599]: I0318 13:07:58.307608 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-92s8c" Mar 18 13:07:58.470946 master-0 kubenswrapper[7599]: I0318 13:07:58.470898 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"] Mar 18 13:07:58.476427 master-0 
kubenswrapper[7599]: I0318 13:07:58.471500 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.476427 master-0 kubenswrapper[7599]: I0318 13:07:58.475729 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:07:58.498486 master-0 kubenswrapper[7599]: I0318 13:07:58.497344 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"] Mar 18 13:07:58.632599 master-0 kubenswrapper[7599]: I0318 13:07:58.629052 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.632599 master-0 kubenswrapper[7599]: I0318 13:07:58.629151 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.632599 master-0 kubenswrapper[7599]: I0318 13:07:58.629181 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82f9g\" (UniqueName: \"kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.632599 master-0 kubenswrapper[7599]: 
I0318 13:07:58.629198 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.730090 master-0 kubenswrapper[7599]: I0318 13:07:58.729789 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.730090 master-0 kubenswrapper[7599]: I0318 13:07:58.729839 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82f9g\" (UniqueName: \"kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.730090 master-0 kubenswrapper[7599]: I0318 13:07:58.729861 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.730090 master-0 kubenswrapper[7599]: I0318 13:07:58.729886 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: 
\"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.731394 master-0 kubenswrapper[7599]: I0318 13:07:58.731366 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.734280 master-0 kubenswrapper[7599]: I0318 13:07:58.734255 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.736720 master-0 kubenswrapper[7599]: I0318 13:07:58.736650 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.762540 master-0 kubenswrapper[7599]: I0318 13:07:58.762482 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82f9g\" (UniqueName: \"kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:07:58.850486 master-0 kubenswrapper[7599]: I0318 13:07:58.850131 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:08:00.532204 master-0 kubenswrapper[7599]: I0318 13:08:00.532092 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wxht4"] Mar 18 13:08:00.532704 master-0 kubenswrapper[7599]: I0318 13:08:00.532673 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.536578 master-0 kubenswrapper[7599]: I0318 13:08:00.536398 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:08:00.536578 master-0 kubenswrapper[7599]: I0318 13:08:00.536456 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 13:08:00.536745 master-0 kubenswrapper[7599]: I0318 13:08:00.536633 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lz5d6" Mar 18 13:08:00.655821 master-0 kubenswrapper[7599]: I0318 13:08:00.655704 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w477x\" (UniqueName: \"kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.655821 master-0 kubenswrapper[7599]: I0318 13:08:00.655765 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " 
pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.656045 master-0 kubenswrapper[7599]: I0318 13:08:00.655861 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.760704 master-0 kubenswrapper[7599]: I0318 13:08:00.758052 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.761325 master-0 kubenswrapper[7599]: I0318 13:08:00.761272 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w477x\" (UniqueName: \"kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.761446 master-0 kubenswrapper[7599]: I0318 13:08:00.761388 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.767299 master-0 kubenswrapper[7599]: I0318 13:08:00.767234 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.770864 master-0 kubenswrapper[7599]: I0318 13:08:00.770816 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.785883 master-0 kubenswrapper[7599]: I0318 13:08:00.785771 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w477x\" (UniqueName: \"kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:00.849002 master-0 kubenswrapper[7599]: I0318 13:08:00.848947 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:08:04.911116 master-0 kubenswrapper[7599]: I0318 13:08:04.911033 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 13:08:04.911961 master-0 kubenswrapper[7599]: I0318 13:08:04.911350 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" containerName="installer" containerID="cri-o://6aac6c573e5ccdfd0f675c97899bd4b5c29da8d75eaf745fc557c6d2353170c9" gracePeriod=30 Mar 18 13:08:06.407555 master-0 kubenswrapper[7599]: W0318 13:08:06.407495 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8aa7c1_0a04_4df0_9047_63ab846b9535.slice/crio-91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731 WatchSource:0}: Error finding container 91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731: Status 404 returned error can't find the container with id 91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731 Mar 18 13:08:06.970056 master-0 kubenswrapper[7599]: I0318 13:08:06.970000 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wxht4" event={"ID":"bd8aa7c1-0a04-4df0-9047-63ab846b9535","Type":"ContainerStarted","Data":"91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731"} Mar 18 13:08:07.838740 master-0 kubenswrapper[7599]: I0318 13:08:07.838677 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"] Mar 18 13:08:07.883091 master-0 kubenswrapper[7599]: I0318 13:08:07.883045 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:08:07.883914 master-0 
kubenswrapper[7599]: I0318 13:08:07.883887 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:07.890721 master-0 kubenswrapper[7599]: I0318 13:08:07.890693 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-w7jpc" Mar 18 13:08:07.901789 master-0 kubenswrapper[7599]: I0318 13:08:07.901594 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:08:07.969227 master-0 kubenswrapper[7599]: I0318 13:08:07.968861 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:07.969227 master-0 kubenswrapper[7599]: I0318 13:08:07.968936 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:07.969227 master-0 kubenswrapper[7599]: I0318 13:08:07.968972 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:07.995311 master-0 kubenswrapper[7599]: I0318 13:08:07.994653 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-wxht4" event={"ID":"bd8aa7c1-0a04-4df0-9047-63ab846b9535","Type":"ContainerStarted","Data":"fc86d76759dab4eb23de1eef4d8d288bf3dac5716425557a62f1343bc2eae90e"} Mar 18 13:08:08.009882 master-0 kubenswrapper[7599]: I0318 13:08:08.007704 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerStarted","Data":"99a3ea12b4f55e1c479ad9ada5ad2452af1ac0e39904d45fd6656f0a1828ea6f"} Mar 18 13:08:08.009882 master-0 kubenswrapper[7599]: I0318 13:08:08.009174 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c"} Mar 18 13:08:08.023136 master-0 kubenswrapper[7599]: I0318 13:08:08.021107 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wxht4" podStartSLOduration=8.021091698 podStartE2EDuration="8.021091698s" podCreationTimestamp="2026-03-18 13:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:08:08.019844647 +0000 UTC m=+62.980898899" watchObservedRunningTime="2026-03-18 13:08:08.021091698 +0000 UTC m=+62.982145940" Mar 18 13:08:08.045447 master-0 kubenswrapper[7599]: I0318 13:08:08.044615 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 13:08:08.045447 master-0 kubenswrapper[7599]: I0318 13:08:08.044868 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" 
containerID="cri-o://ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33" gracePeriod=30 Mar 18 13:08:08.045447 master-0 kubenswrapper[7599]: I0318 13:08:08.045014 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7" gracePeriod=30 Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: I0318 13:08:08.051580 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: E0318 13:08:08.051790 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: I0318 13:08:08.051800 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: E0318 13:08:08.051820 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: I0318 13:08:08.051826 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: I0318 13:08:08.051917 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 13:08:08.053689 master-0 kubenswrapper[7599]: I0318 13:08:08.051930 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 13:08:08.062642 master-0 kubenswrapper[7599]: I0318 13:08:08.062601 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.069035 master-0 kubenswrapper[7599]: I0318 13:08:08.068462 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podStartSLOduration=19.967386548 podStartE2EDuration="31.068446192s" podCreationTimestamp="2026-03-18 13:07:37 +0000 UTC" firstStartedPulling="2026-03-18 13:07:55.29637416 +0000 UTC m=+50.257428402" lastFinishedPulling="2026-03-18 13:08:06.397433794 +0000 UTC m=+61.358488046" observedRunningTime="2026-03-18 13:08:08.066836751 +0000 UTC m=+63.027890993" watchObservedRunningTime="2026-03-18 13:08:08.068446192 +0000 UTC m=+63.029500434" Mar 18 13:08:08.074522 master-0 kubenswrapper[7599]: I0318 13:08:08.070003 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:08.074522 master-0 kubenswrapper[7599]: I0318 13:08:08.070063 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:08.074522 master-0 kubenswrapper[7599]: I0318 13:08:08.070093 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:08.074522 master-0 kubenswrapper[7599]: I0318 13:08:08.070181 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:08.074522 master-0 kubenswrapper[7599]: I0318 13:08:08.070236 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:08.171367 master-0 kubenswrapper[7599]: I0318 13:08:08.171179 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.171367 master-0 kubenswrapper[7599]: I0318 13:08:08.171301 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.171367 master-0 kubenswrapper[7599]: I0318 13:08:08.171321 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.171367 master-0 kubenswrapper[7599]: I0318 13:08:08.171358 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.171367 master-0 kubenswrapper[7599]: I0318 13:08:08.171376 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.171834 master-0 kubenswrapper[7599]: I0318 13:08:08.171407 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.251889 master-0 kubenswrapper[7599]: I0318 13:08:08.251819 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:08:08.253834 master-0 kubenswrapper[7599]: I0318 13:08:08.253784 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:08.253834 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:08.253834 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:08.253834 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:08.253834 master-0 kubenswrapper[7599]: I0318 13:08:08.253826 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273010 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273070 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273099 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273125 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273161 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273195 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273268 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273320 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273355 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273387 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273436 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") 
" pod="openshift-etcd/etcd-master-0" Mar 18 13:08:08.276092 master-0 kubenswrapper[7599]: I0318 13:08:08.273473 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:08:09.017135 master-0 kubenswrapper[7599]: I0318 13:08:09.017065 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" event={"ID":"6db2bfbd-d8db-4384-8979-23e8a1e87e5e","Type":"ContainerStarted","Data":"832f56b880d15099c333330fc427d4ba30a01745231832a4a7863a3a894c690d"} Mar 18 13:08:09.017577 master-0 kubenswrapper[7599]: I0318 13:08:09.017515 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:08:09.018963 master-0 kubenswrapper[7599]: I0318 13:08:09.018905 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_250b63d5-21ee-44d3-821e-f42a8112dc50/installer/0.log" Mar 18 13:08:09.019045 master-0 kubenswrapper[7599]: I0318 13:08:09.018996 7599 generic.go:334] "Generic (PLEG): container finished" podID="250b63d5-21ee-44d3-821e-f42a8112dc50" containerID="53bd0f911da22f6347919de47020dd5ee65cf68785aa75b9d25bd48d7e0221f2" exitCode=1 Mar 18 13:08:09.019167 master-0 kubenswrapper[7599]: I0318 13:08:09.019107 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"250b63d5-21ee-44d3-821e-f42a8112dc50","Type":"ContainerDied","Data":"53bd0f911da22f6347919de47020dd5ee65cf68785aa75b9d25bd48d7e0221f2"} Mar 18 13:08:09.019238 master-0 kubenswrapper[7599]: I0318 13:08:09.019186 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"250b63d5-21ee-44d3-821e-f42a8112dc50","Type":"ContainerDied","Data":"1c1ec2ef0ddc216ba6a24212f029996acc4207f26f3f7674359334d3b8b83054"} Mar 18 13:08:09.019238 master-0 kubenswrapper[7599]: I0318 13:08:09.019231 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1ec2ef0ddc216ba6a24212f029996acc4207f26f3f7674359334d3b8b83054" Mar 18 13:08:09.025172 master-0 kubenswrapper[7599]: I0318 13:08:09.025095 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_250b63d5-21ee-44d3-821e-f42a8112dc50/installer/0.log" Mar 18 13:08:09.025311 master-0 kubenswrapper[7599]: I0318 13:08:09.025271 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:09.029094 master-0 kubenswrapper[7599]: I0318 13:08:09.029035 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerStarted","Data":"48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80"} Mar 18 13:08:09.029156 master-0 kubenswrapper[7599]: I0318 13:08:09.029112 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:08:09.029622 master-0 kubenswrapper[7599]: I0318 13:08:09.029582 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:08:09.194196 master-0 kubenswrapper[7599]: I0318 13:08:09.193014 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock\") pod \"250b63d5-21ee-44d3-821e-f42a8112dc50\" (UID: 
\"250b63d5-21ee-44d3-821e-f42a8112dc50\") " Mar 18 13:08:09.194196 master-0 kubenswrapper[7599]: I0318 13:08:09.193107 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access\") pod \"250b63d5-21ee-44d3-821e-f42a8112dc50\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " Mar 18 13:08:09.194196 master-0 kubenswrapper[7599]: I0318 13:08:09.193207 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir\") pod \"250b63d5-21ee-44d3-821e-f42a8112dc50\" (UID: \"250b63d5-21ee-44d3-821e-f42a8112dc50\") " Mar 18 13:08:09.194451 master-0 kubenswrapper[7599]: I0318 13:08:09.194255 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "250b63d5-21ee-44d3-821e-f42a8112dc50" (UID: "250b63d5-21ee-44d3-821e-f42a8112dc50"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:09.195454 master-0 kubenswrapper[7599]: I0318 13:08:09.194674 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock" (OuterVolumeSpecName: "var-lock") pod "250b63d5-21ee-44d3-821e-f42a8112dc50" (UID: "250b63d5-21ee-44d3-821e-f42a8112dc50"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:09.195768 master-0 kubenswrapper[7599]: I0318 13:08:09.195737 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:09.195768 master-0 kubenswrapper[7599]: I0318 13:08:09.195765 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/250b63d5-21ee-44d3-821e-f42a8112dc50-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:09.198872 master-0 kubenswrapper[7599]: I0318 13:08:09.197668 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "250b63d5-21ee-44d3-821e-f42a8112dc50" (UID: "250b63d5-21ee-44d3-821e-f42a8112dc50"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:09.254653 master-0 kubenswrapper[7599]: I0318 13:08:09.254526 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:09.254653 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:09.254653 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:09.254653 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:09.254653 master-0 kubenswrapper[7599]: I0318 13:08:09.254623 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:09.296593 master-0 kubenswrapper[7599]: I0318 13:08:09.296536 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b63d5-21ee-44d3-821e-f42a8112dc50-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:10.029293 master-0 kubenswrapper[7599]: I0318 13:08:10.029238 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:10.029957 master-0 kubenswrapper[7599]: I0318 13:08:10.029293 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:10.030907 master-0 kubenswrapper[7599]: I0318 13:08:10.030861 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 13:08:10.253867 master-0 kubenswrapper[7599]: I0318 13:08:10.253764 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:10.253867 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:10.253867 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:10.253867 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:10.253867 master-0 kubenswrapper[7599]: I0318 13:08:10.253830 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:11.031782 master-0 kubenswrapper[7599]: I0318 13:08:11.031692 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:11.031782 master-0 kubenswrapper[7599]: I0318 13:08:11.031787 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:11.254784 master-0 kubenswrapper[7599]: I0318 13:08:11.254674 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:11.254784 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:11.254784 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:11.254784 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:11.254784 master-0 kubenswrapper[7599]: I0318 13:08:11.254768 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:12.254850 master-0 kubenswrapper[7599]: I0318 13:08:12.254782 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:12.254850 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:12.254850 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:12.254850 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:12.255390 master-0 kubenswrapper[7599]: I0318 13:08:12.254855 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:13.253247 master-0 kubenswrapper[7599]: I0318 13:08:13.253198 
7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:13.253247 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:13.253247 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:13.253247 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:13.253560 master-0 kubenswrapper[7599]: I0318 13:08:13.253263 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:14.254563 master-0 kubenswrapper[7599]: I0318 13:08:14.254513 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:14.254563 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:14.254563 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:14.254563 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:14.255274 master-0 kubenswrapper[7599]: I0318 13:08:14.255241 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:15.252187 master-0 kubenswrapper[7599]: I0318 13:08:15.252117 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:08:15.255366 master-0 kubenswrapper[7599]: I0318 13:08:15.255311 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:15.255366 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:15.255366 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:15.255366 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:15.256551 master-0 kubenswrapper[7599]: I0318 13:08:15.256497 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:16.254546 master-0 kubenswrapper[7599]: I0318 13:08:16.254471 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:16.254546 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:16.254546 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:16.254546 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:16.254844 master-0 kubenswrapper[7599]: I0318 13:08:16.254591 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:17.254039 master-0 kubenswrapper[7599]: I0318 13:08:17.253950 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:17.254039 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:17.254039 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:17.254039 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:17.255191 master-0 kubenswrapper[7599]: I0318 13:08:17.254042 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:18.076227 master-0 kubenswrapper[7599]: E0318 13:08:18.075856 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:18.255287 master-0 kubenswrapper[7599]: I0318 13:08:18.255211 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:18.255287 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:18.255287 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:18.255287 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:18.256086 master-0 kubenswrapper[7599]: I0318 13:08:18.255312 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:19.254074 master-0 kubenswrapper[7599]: I0318 13:08:19.253952 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:19.254074 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:19.254074 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:19.254074 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:19.254074 master-0 kubenswrapper[7599]: I0318 13:08:19.254070 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:19.852584 master-0 kubenswrapper[7599]: I0318 13:08:19.852510 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:19.852584 master-0 kubenswrapper[7599]: I0318 13:08:19.852528 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:19.852584 master-0 kubenswrapper[7599]: I0318 13:08:19.852573 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:19.853283 master-0 kubenswrapper[7599]: I0318 13:08:19.852632 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:20.083366 master-0 kubenswrapper[7599]: I0318 13:08:20.083252 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_6298bf7b-ba09-4b4a-a0c6-f1989113eb5f/installer/0.log" Mar 18 13:08:20.083366 master-0 kubenswrapper[7599]: I0318 13:08:20.083322 7599 generic.go:334] "Generic (PLEG): container finished" podID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" containerID="6aac6c573e5ccdfd0f675c97899bd4b5c29da8d75eaf745fc557c6d2353170c9" exitCode=1 Mar 18 13:08:20.083683 master-0 kubenswrapper[7599]: I0318 13:08:20.083360 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f","Type":"ContainerDied","Data":"6aac6c573e5ccdfd0f675c97899bd4b5c29da8d75eaf745fc557c6d2353170c9"} Mar 18 13:08:20.254688 master-0 kubenswrapper[7599]: I0318 13:08:20.254593 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:20.254688 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:20.254688 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:20.254688 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:20.254688 master-0 kubenswrapper[7599]: I0318 
13:08:20.254661 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:20.666065 master-0 kubenswrapper[7599]: I0318 13:08:20.666004 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_6298bf7b-ba09-4b4a-a0c6-f1989113eb5f/installer/0.log" Mar 18 13:08:20.666065 master-0 kubenswrapper[7599]: I0318 13:08:20.666070 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:20.847897 master-0 kubenswrapper[7599]: I0318 13:08:20.847827 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock\") pod \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " Mar 18 13:08:20.847897 master-0 kubenswrapper[7599]: I0318 13:08:20.847890 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir\") pod \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " Mar 18 13:08:20.848183 master-0 kubenswrapper[7599]: I0318 13:08:20.847951 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access\") pod \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\" (UID: \"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f\") " Mar 18 13:08:20.848183 master-0 kubenswrapper[7599]: I0318 13:08:20.847948 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock" (OuterVolumeSpecName: "var-lock") pod "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" (UID: "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:20.848183 master-0 kubenswrapper[7599]: I0318 13:08:20.848075 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" (UID: "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:08:20.848621 master-0 kubenswrapper[7599]: I0318 13:08:20.848573 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:20.848684 master-0 kubenswrapper[7599]: I0318 13:08:20.848620 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:20.853148 master-0 kubenswrapper[7599]: I0318 13:08:20.853088 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" (UID: "6298bf7b-ba09-4b4a-a0c6-f1989113eb5f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:08:20.950155 master-0 kubenswrapper[7599]: I0318 13:08:20.950065 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:08:21.090074 master-0 kubenswrapper[7599]: I0318 13:08:21.090011 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_6298bf7b-ba09-4b4a-a0c6-f1989113eb5f/installer/0.log" Mar 18 13:08:21.090239 master-0 kubenswrapper[7599]: I0318 13:08:21.090082 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"6298bf7b-ba09-4b4a-a0c6-f1989113eb5f","Type":"ContainerDied","Data":"772ff10871f2688fceb3e214c45da5d6fc0693e88c7f44d1fd3c3965a234fca8"} Mar 18 13:08:21.090239 master-0 kubenswrapper[7599]: I0318 13:08:21.090131 7599 scope.go:117] "RemoveContainer" containerID="6aac6c573e5ccdfd0f675c97899bd4b5c29da8d75eaf745fc557c6d2353170c9" Mar 18 13:08:21.090239 master-0 kubenswrapper[7599]: I0318 13:08:21.090151 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 13:08:21.109881 master-0 kubenswrapper[7599]: E0318 13:08:21.109822 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:08:21.110437 master-0 kubenswrapper[7599]: I0318 13:08:21.110382 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:08:21.125795 master-0 kubenswrapper[7599]: W0318 13:08:21.125733 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-7982d37cc1b5aad930ddd864b39138fff115e8b337b3ec2eb4562170e6f8c442 WatchSource:0}: Error finding container 7982d37cc1b5aad930ddd864b39138fff115e8b337b3ec2eb4562170e6f8c442: Status 404 returned error can't find the container with id 7982d37cc1b5aad930ddd864b39138fff115e8b337b3ec2eb4562170e6f8c442 Mar 18 13:08:21.253586 master-0 kubenswrapper[7599]: I0318 13:08:21.253462 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:21.253586 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:21.253586 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:21.253586 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:21.253586 master-0 kubenswrapper[7599]: I0318 13:08:21.253530 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:22.101229 master-0 kubenswrapper[7599]: I0318 13:08:22.101132 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a" exitCode=0 Mar 18 13:08:22.101229 master-0 kubenswrapper[7599]: I0318 13:08:22.101203 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a"} Mar 18 13:08:22.102240 master-0 kubenswrapper[7599]: I0318 13:08:22.101250 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"7982d37cc1b5aad930ddd864b39138fff115e8b337b3ec2eb4562170e6f8c442"} Mar 18 13:08:22.255210 master-0 kubenswrapper[7599]: I0318 13:08:22.255012 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:22.255210 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:22.255210 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:22.255210 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:22.255210 master-0 kubenswrapper[7599]: I0318 13:08:22.255133 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:23.111048 master-0 kubenswrapper[7599]: I0318 13:08:23.110949 7599 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525" exitCode=1 Mar 18 13:08:23.112150 master-0 kubenswrapper[7599]: I0318 13:08:23.111098 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525"} Mar 18 13:08:23.112150 master-0 
kubenswrapper[7599]: I0318 13:08:23.111213 7599 scope.go:117] "RemoveContainer" containerID="ea4c7c8dc1dee8fb69dc17e4e5c096e51c691a4d47e30362ad839b224364d388" Mar 18 13:08:23.112150 master-0 kubenswrapper[7599]: I0318 13:08:23.111923 7599 scope.go:117] "RemoveContainer" containerID="f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525" Mar 18 13:08:23.114466 master-0 kubenswrapper[7599]: I0318 13:08:23.114391 7599 generic.go:334] "Generic (PLEG): container finished" podID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerID="bd16bdf4e73c45c278128af3a659c5a213de4cb9ef8b0c72e75eabe56dd40dbc" exitCode=0 Mar 18 13:08:23.114466 master-0 kubenswrapper[7599]: I0318 13:08:23.114445 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerDied","Data":"bd16bdf4e73c45c278128af3a659c5a213de4cb9ef8b0c72e75eabe56dd40dbc"} Mar 18 13:08:23.253211 master-0 kubenswrapper[7599]: I0318 13:08:23.253070 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:23.253211 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:23.253211 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:23.253211 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:23.253211 master-0 kubenswrapper[7599]: I0318 13:08:23.253131 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:23.324206 master-0 kubenswrapper[7599]: I0318 13:08:23.324160 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:08:24.118343 master-0 kubenswrapper[7599]: I0318 13:08:24.118287 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:08:24.123117 master-0 kubenswrapper[7599]: I0318 13:08:24.123069 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"c54bcf4ddd56343697f9602341ecf51d80939627fe3f4a59637f96162fa1598d"}
Mar 18 13:08:24.293445 master-0 kubenswrapper[7599]: I0318 13:08:24.292826 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:24.293445 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:24.293445 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:24.293445 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:24.293445 master-0 kubenswrapper[7599]: I0318 13:08:24.292921 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:24.562854 master-0 kubenswrapper[7599]: I0318 13:08:24.562816 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 13:08:24.699144 master-0 kubenswrapper[7599]: I0318 13:08:24.699037 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock\") pod \"814ffa63-b08e-4de8-b912-8d7f0638230b\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") "
Mar 18 13:08:24.699144 master-0 kubenswrapper[7599]: I0318 13:08:24.699144 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir\") pod \"814ffa63-b08e-4de8-b912-8d7f0638230b\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") "
Mar 18 13:08:24.699551 master-0 kubenswrapper[7599]: I0318 13:08:24.699176 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock" (OuterVolumeSpecName: "var-lock") pod "814ffa63-b08e-4de8-b912-8d7f0638230b" (UID: "814ffa63-b08e-4de8-b912-8d7f0638230b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:24.699551 master-0 kubenswrapper[7599]: I0318 13:08:24.699257 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access\") pod \"814ffa63-b08e-4de8-b912-8d7f0638230b\" (UID: \"814ffa63-b08e-4de8-b912-8d7f0638230b\") "
Mar 18 13:08:24.699551 master-0 kubenswrapper[7599]: I0318 13:08:24.699310 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "814ffa63-b08e-4de8-b912-8d7f0638230b" (UID: "814ffa63-b08e-4de8-b912-8d7f0638230b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:24.699795 master-0 kubenswrapper[7599]: I0318 13:08:24.699602 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:24.699795 master-0 kubenswrapper[7599]: I0318 13:08:24.699631 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/814ffa63-b08e-4de8-b912-8d7f0638230b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:24.704250 master-0 kubenswrapper[7599]: I0318 13:08:24.704169 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "814ffa63-b08e-4de8-b912-8d7f0638230b" (UID: "814ffa63-b08e-4de8-b912-8d7f0638230b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:08:24.800638 master-0 kubenswrapper[7599]: I0318 13:08:24.800471 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/814ffa63-b08e-4de8-b912-8d7f0638230b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:25.132333 master-0 kubenswrapper[7599]: I0318 13:08:25.132165 7599 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767" exitCode=1
Mar 18 13:08:25.133212 master-0 kubenswrapper[7599]: I0318 13:08:25.132321 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767"}
Mar 18 13:08:25.133377 master-0 kubenswrapper[7599]: I0318 13:08:25.133222 7599 scope.go:117] "RemoveContainer" containerID="fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767"
Mar 18 13:08:25.138607 master-0 kubenswrapper[7599]: I0318 13:08:25.138546 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerDied","Data":"399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a"}
Mar 18 13:08:25.138695 master-0 kubenswrapper[7599]: I0318 13:08:25.138603 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 13:08:25.138695 master-0 kubenswrapper[7599]: I0318 13:08:25.138614 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a"
Mar 18 13:08:25.255219 master-0 kubenswrapper[7599]: I0318 13:08:25.255100 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:25.255219 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:25.255219 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:25.255219 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:25.255689 master-0 kubenswrapper[7599]: I0318 13:08:25.255557 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:26.147507 master-0 kubenswrapper[7599]: I0318 13:08:26.147402 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"2b457975b25ebc40cb55c5ee5a932669f56be2bc949f61f598bc7d15209f09c7"}
Mar 18 13:08:26.253978 master-0 kubenswrapper[7599]: I0318 13:08:26.253872 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:26.253978 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:26.253978 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:26.253978 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:26.254278 master-0 kubenswrapper[7599]: I0318 13:08:26.254007 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:27.254661 master-0 kubenswrapper[7599]: I0318 13:08:27.254577 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:27.254661 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:27.254661 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:27.254661 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:27.255178 master-0 kubenswrapper[7599]: I0318 13:08:27.254682 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:28.077215 master-0 kubenswrapper[7599]: E0318 13:08:28.077068 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:08:28.254605 master-0 kubenswrapper[7599]: I0318 13:08:28.254537 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:28.254605 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:28.254605 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:28.254605 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:28.255674 master-0 kubenswrapper[7599]: I0318 13:08:28.254615 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:28.444374 master-0 kubenswrapper[7599]: I0318 13:08:28.444251 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:08:28.520259 master-0 kubenswrapper[7599]: I0318 13:08:28.520194 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:08:29.254657 master-0 kubenswrapper[7599]: I0318 13:08:29.254580 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:29.254657 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:29.254657 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:29.254657 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:29.255214 master-0 kubenswrapper[7599]: I0318 13:08:29.254682 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:29.851789 master-0 kubenswrapper[7599]: I0318 13:08:29.851708 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:08:29.851789 master-0 kubenswrapper[7599]: I0318 13:08:29.851766 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:08:29.852164 master-0 kubenswrapper[7599]: I0318 13:08:29.851801 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:08:29.852164 master-0 kubenswrapper[7599]: I0318 13:08:29.851825 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:08:30.254142 master-0 kubenswrapper[7599]: I0318 13:08:30.254075 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:30.254142 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:30.254142 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:30.254142 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:30.254489 master-0 kubenswrapper[7599]: I0318 13:08:30.254143 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:31.254606 master-0 kubenswrapper[7599]: I0318 13:08:31.254507 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:31.254606 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:31.254606 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:31.254606 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:31.255354 master-0 kubenswrapper[7599]: I0318 13:08:31.254614 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:32.254860 master-0 kubenswrapper[7599]: I0318 13:08:32.254797 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:32.254860 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:32.254860 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:32.254860 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:32.255593 master-0 kubenswrapper[7599]: I0318 13:08:32.254890 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:33.255079 master-0 kubenswrapper[7599]: I0318 13:08:33.254973 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:33.255079 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:33.255079 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:33.255079 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:33.255676 master-0 kubenswrapper[7599]: I0318 13:08:33.255089 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:33.324795 master-0 kubenswrapper[7599]: I0318 13:08:33.324659 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:08:34.191131 master-0 kubenswrapper[7599]: I0318 13:08:34.191051 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41/installer/0.log"
Mar 18 13:08:34.191131 master-0 kubenswrapper[7599]: I0318 13:08:34.191126 7599 generic.go:334] "Generic (PLEG): container finished" podID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerID="31b0fc8784eb8367b69b8a7c847bfd1469f93f534490b89c89aa0c82a72151b2" exitCode=1
Mar 18 13:08:34.191622 master-0 kubenswrapper[7599]: I0318 13:08:34.191190 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerDied","Data":"31b0fc8784eb8367b69b8a7c847bfd1469f93f534490b89c89aa0c82a72151b2"}
Mar 18 13:08:34.254764 master-0 kubenswrapper[7599]: I0318 13:08:34.254688 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:34.254764 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:34.254764 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:34.254764 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:34.254764 master-0 kubenswrapper[7599]: I0318 13:08:34.254745 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:35.110434 master-0 kubenswrapper[7599]: E0318 13:08:35.110366 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 13:08:35.201715 master-0 kubenswrapper[7599]: I0318 13:08:35.201665 7599 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7" exitCode=0
Mar 18 13:08:35.252880 master-0 kubenswrapper[7599]: I0318 13:08:35.252834 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:35.252880 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:35.252880 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:35.252880 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:35.253074 master-0 kubenswrapper[7599]: I0318 13:08:35.252890 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:35.487077 master-0 kubenswrapper[7599]: I0318 13:08:35.487044 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41/installer/0.log"
Mar 18 13:08:35.487623 master-0 kubenswrapper[7599]: I0318 13:08:35.487101 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:35.671020 master-0 kubenswrapper[7599]: I0318 13:08:35.670909 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir\") pod \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") "
Mar 18 13:08:35.671020 master-0 kubenswrapper[7599]: I0318 13:08:35.671005 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock\") pod \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") "
Mar 18 13:08:35.671357 master-0 kubenswrapper[7599]: I0318 13:08:35.671046 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" (UID: "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:35.671357 master-0 kubenswrapper[7599]: I0318 13:08:35.671134 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access\") pod \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\" (UID: \"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41\") "
Mar 18 13:08:35.671357 master-0 kubenswrapper[7599]: I0318 13:08:35.671257 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock" (OuterVolumeSpecName: "var-lock") pod "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" (UID: "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:35.671654 master-0 kubenswrapper[7599]: I0318 13:08:35.671567 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:35.671654 master-0 kubenswrapper[7599]: I0318 13:08:35.671595 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:35.674322 master-0 kubenswrapper[7599]: I0318 13:08:35.674265 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" (UID: "e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:08:35.772864 master-0 kubenswrapper[7599]: I0318 13:08:35.772744 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:36.209874 master-0 kubenswrapper[7599]: I0318 13:08:36.209773 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41/installer/0.log"
Mar 18 13:08:36.210151 master-0 kubenswrapper[7599]: I0318 13:08:36.209893 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerDied","Data":"3cb0fd8ad50843d858abaee21b28a02e53fe5cd0a20c10c6df87f1573285730f"}
Mar 18 13:08:36.210151 master-0 kubenswrapper[7599]: I0318 13:08:36.209934 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb0fd8ad50843d858abaee21b28a02e53fe5cd0a20c10c6df87f1573285730f"
Mar 18 13:08:36.210151 master-0 kubenswrapper[7599]: I0318 13:08:36.209967 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 18 13:08:36.212573 master-0 kubenswrapper[7599]: I0318 13:08:36.212239 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252" exitCode=0
Mar 18 13:08:36.212690 master-0 kubenswrapper[7599]: I0318 13:08:36.212340 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252"}
Mar 18 13:08:36.254653 master-0 kubenswrapper[7599]: I0318 13:08:36.254603 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:36.254653 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:36.254653 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:36.254653 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:36.254965 master-0 kubenswrapper[7599]: I0318 13:08:36.254660 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:36.325502 master-0 kubenswrapper[7599]: I0318 13:08:36.325324 7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:08:37.255109 master-0 kubenswrapper[7599]: I0318 13:08:37.255022 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:37.255109 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:37.255109 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:37.255109 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:37.256374 master-0 kubenswrapper[7599]: I0318 13:08:37.255120 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:37.470434 master-0 kubenswrapper[7599]: I0318 13:08:37.470361 7599 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-jx4mf container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body=
Mar 18 13:08:37.470434 master-0 kubenswrapper[7599]: I0318 13:08:37.470438 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" podUID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused"
Mar 18 13:08:38.078408 master-0 kubenswrapper[7599]: E0318 13:08:38.078319 7599 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)"
Mar 18 13:08:38.214387 master-0 kubenswrapper[7599]: I0318 13:08:38.214304 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log"
Mar 18 13:08:38.214387 master-0 kubenswrapper[7599]: I0318 13:08:38.214387 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:08:38.226659 master-0 kubenswrapper[7599]: I0318 13:08:38.226539 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log"
Mar 18 13:08:38.226896 master-0 kubenswrapper[7599]: I0318 13:08:38.226660 7599 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33" exitCode=137
Mar 18 13:08:38.226896 master-0 kubenswrapper[7599]: I0318 13:08:38.226715 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:08:38.226896 master-0 kubenswrapper[7599]: I0318 13:08:38.226782 7599 scope.go:117] "RemoveContainer" containerID="b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"
Mar 18 13:08:38.234260 master-0 kubenswrapper[7599]: I0318 13:08:38.234198 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/0.log"
Mar 18 13:08:38.234590 master-0 kubenswrapper[7599]: I0318 13:08:38.234266 7599 generic.go:334] "Generic (PLEG): container finished" podID="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" containerID="58ba17cb9e47416db3b6a0a6b8c2a2608308d20a79593a16babea0c6f26ec54c" exitCode=1
Mar 18 13:08:38.234590 master-0 kubenswrapper[7599]: I0318 13:08:38.234308 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerDied","Data":"58ba17cb9e47416db3b6a0a6b8c2a2608308d20a79593a16babea0c6f26ec54c"}
Mar 18 13:08:38.235111 master-0 kubenswrapper[7599]: I0318 13:08:38.235056 7599 scope.go:117] "RemoveContainer" containerID="58ba17cb9e47416db3b6a0a6b8c2a2608308d20a79593a16babea0c6f26ec54c"
Mar 18 13:08:38.248397 master-0 kubenswrapper[7599]: I0318 13:08:38.248344 7599 scope.go:117] "RemoveContainer" containerID="ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"
Mar 18 13:08:38.254124 master-0 kubenswrapper[7599]: I0318 13:08:38.254058 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:38.254124 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:38.254124 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:38.254124 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:38.254124 master-0 kubenswrapper[7599]: I0318 13:08:38.254116 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:38.266481 master-0 kubenswrapper[7599]: I0318 13:08:38.266309 7599 scope.go:117] "RemoveContainer" containerID="b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"
Mar 18 13:08:38.266984 master-0 kubenswrapper[7599]: E0318 13:08:38.266914 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7\": container with ID starting with b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7 not found: ID does not exist" containerID="b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"
Mar 18 13:08:38.266984 master-0 kubenswrapper[7599]: I0318 13:08:38.266951 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7"} err="failed to get container status \"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7\": rpc error: code = NotFound desc = could not find container \"b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7\": container with ID starting with b680406b619eed9e8ddd7c1c6d8c3b3b49b30c6e09e5967e05d69c77c60fb8e7 not found: ID does not exist"
Mar 18 13:08:38.266984 master-0 kubenswrapper[7599]: I0318 13:08:38.266975 7599 scope.go:117] "RemoveContainer" containerID="ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"
Mar 18 13:08:38.267390 master-0 kubenswrapper[7599]: E0318 13:08:38.267340 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33\": container with ID starting with ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33 not found: ID does not exist" containerID="ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"
Mar 18 13:08:38.267390 master-0 kubenswrapper[7599]: I0318 13:08:38.267368 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33"} err="failed to get container status \"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33\": rpc error: code = NotFound desc = could not find container \"ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33\": container with ID starting with ac4829129d2b10e54c93299397b9cc08e24d1caf7621cbb7c7ae941e1d3f8b33 not found: ID does not exist"
Mar 18 13:08:38.410919 master-0 kubenswrapper[7599]: I0318 13:08:38.410846 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") "
Mar 18 13:08:38.411154 master-0 kubenswrapper[7599]: I0318 13:08:38.410952 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:38.411154 master-0 kubenswrapper[7599]: I0318 13:08:38.411099 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") "
Mar 18 13:08:38.411302 master-0 kubenswrapper[7599]: I0318 13:08:38.411251 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:08:38.412257 master-0 kubenswrapper[7599]: I0318 13:08:38.411903 7599 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:38.412257 master-0 kubenswrapper[7599]: I0318 13:08:38.411950 7599 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:08:39.239765 master-0 kubenswrapper[7599]: I0318 13:08:39.239670 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/0.log"
Mar 18 13:08:39.239765 master-0 kubenswrapper[7599]: I0318 13:08:39.239754 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"8dcf0d47755aa9729c9174b6d9eec6a76d4adc29a9ce8725fd5baba97772cee5"}
Mar 18 13:08:39.254883 master-0 kubenswrapper[7599]: I0318 13:08:39.254789 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:08:39.254883 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:08:39.254883 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:08:39.254883 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:08:39.254883 master-0 kubenswrapper[7599]: I0318 13:08:39.254859 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:08:39.379306 master-0 kubenswrapper[7599]: I0318 13:08:39.379216 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes"
Mar 18 13:08:39.379873 master-0 kubenswrapper[7599]: I0318 13:08:39.379726 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 13:08:39.851788 master-0 kubenswrapper[7599]: I0318 13:08:39.851638 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:08:39.851788 master-0 kubenswrapper[7599]: I0318 13:08:39.851777 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:08:39.852179 master-0 kubenswrapper[7599]: I0318 13:08:39.851696 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:08:39.852179 master-0 kubenswrapper[7599]: I0318
13:08:39.851877 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:40.255049 master-0 kubenswrapper[7599]: I0318 13:08:40.254956 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:40.255049 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:40.255049 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:40.255049 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:40.255476 master-0 kubenswrapper[7599]: I0318 13:08:40.255061 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:41.253883 master-0 kubenswrapper[7599]: I0318 13:08:41.253787 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:41.253883 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:41.253883 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:41.253883 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:41.254891 master-0 kubenswrapper[7599]: I0318 13:08:41.253892 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:42.072539 master-0 kubenswrapper[7599]: E0318 13:08:42.072279 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189df16f2c8dc0eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08.044994795 +0000 UTC m=+63.006049037,LastTimestamp:2026-03-18 13:08:08.044994795 +0000 UTC m=+63.006049037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:08:42.072844 master-0 kubenswrapper[7599]: E0318 13:08:42.072759 7599 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:08:42.073033 master-0 kubenswrapper[7599]: E0318 13:08:42.072994 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access podName:6a4c87a8-6bf0-43b2-b598-1561cba3e391 nodeName:}" failed. No retries permitted until 2026-03-18 13:08:42.572934792 +0000 UTC m=+97.533989064 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access") pod "installer-2-master-0" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:08:42.255387 master-0 kubenswrapper[7599]: I0318 13:08:42.255299 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:42.255387 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:42.255387 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:42.255387 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:42.255387 master-0 kubenswrapper[7599]: I0318 13:08:42.255382 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:42.670466 master-0 kubenswrapper[7599]: I0318 13:08:42.670349 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:08:43.254796 master-0 kubenswrapper[7599]: I0318 13:08:43.254713 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:43.254796 master-0 
kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:43.254796 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:43.254796 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:43.254796 master-0 kubenswrapper[7599]: I0318 13:08:43.254782 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:43.929222 master-0 kubenswrapper[7599]: I0318 13:08:43.929141 7599 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-sp4ld container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 18 13:08:43.929762 master-0 kubenswrapper[7599]: I0318 13:08:43.929235 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" podUID="bf9d21f9-64d6-4e21-a985-491197038568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 18 13:08:44.255181 master-0 kubenswrapper[7599]: I0318 13:08:44.255017 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:44.255181 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:44.255181 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:44.255181 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:44.255181 master-0 kubenswrapper[7599]: I0318 13:08:44.255117 7599 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:45.254930 master-0 kubenswrapper[7599]: I0318 13:08:45.254779 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:45.254930 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:45.254930 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:45.254930 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:45.256027 master-0 kubenswrapper[7599]: I0318 13:08:45.254941 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:46.254377 master-0 kubenswrapper[7599]: I0318 13:08:46.254248 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:46.254377 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:46.254377 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:46.254377 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:46.254377 master-0 kubenswrapper[7599]: I0318 13:08:46.254352 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:08:46.324558 master-0 kubenswrapper[7599]: I0318 13:08:46.324385 7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:47.253771 master-0 kubenswrapper[7599]: I0318 13:08:47.253685 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:47.253771 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:47.253771 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:47.253771 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:47.253771 master-0 kubenswrapper[7599]: I0318 13:08:47.253756 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:47.294446 master-0 kubenswrapper[7599]: I0318 13:08:47.294372 7599 generic.go:334] "Generic (PLEG): container finished" podID="b75d4622-ac12-4f82-afc9-ab63e6278b0c" containerID="4f4390a1edc4e74d8425b268d4802fbbd68b0a727bcc922dd63ac0c094e61704" exitCode=0 Mar 18 13:08:48.079542 master-0 kubenswrapper[7599]: E0318 13:08:48.079377 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 
13:08:48.254519 master-0 kubenswrapper[7599]: I0318 13:08:48.254403 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:48.254519 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:48.254519 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:48.254519 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:48.254519 master-0 kubenswrapper[7599]: I0318 13:08:48.254496 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:48.721548 master-0 kubenswrapper[7599]: E0318 13:08:48.720836 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:08:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:08:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:08:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:08:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c69833
95b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\\"],\\\"sizeBytes\\\":487159945},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\
"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0\\\"],\\\"sizeBytes\\\":444573129},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:49.220480 master-0 kubenswrapper[7599]: E0318 13:08:49.220379 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 13:08:49.258195 master-0 kubenswrapper[7599]: I0318 13:08:49.258092 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 18 13:08:49.258195 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:49.258195 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:49.258195 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:49.258195 master-0 kubenswrapper[7599]: I0318 13:08:49.258155 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:49.851847 master-0 kubenswrapper[7599]: I0318 13:08:49.851728 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:49.851847 master-0 kubenswrapper[7599]: I0318 13:08:49.851811 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:50.253350 master-0 kubenswrapper[7599]: I0318 13:08:50.253285 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:50.253350 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:50.253350 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:50.253350 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:08:50.253866 master-0 kubenswrapper[7599]: I0318 13:08:50.253356 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:50.313692 master-0 kubenswrapper[7599]: I0318 13:08:50.313644 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429" exitCode=0 Mar 18 13:08:51.254625 master-0 kubenswrapper[7599]: I0318 13:08:51.254522 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:51.254625 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:51.254625 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:51.254625 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:51.255128 master-0 kubenswrapper[7599]: I0318 13:08:51.254627 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:52.253954 master-0 kubenswrapper[7599]: I0318 13:08:52.253799 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:52.253954 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:52.253954 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:08:52.253954 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:52.253954 master-0 kubenswrapper[7599]: I0318 13:08:52.253857 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:52.327842 master-0 kubenswrapper[7599]: I0318 13:08:52.327762 7599 generic.go:334] "Generic (PLEG): container finished" podID="0c2c4a58-9780-4ecd-b417-e590ac3576ed" containerID="8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83" exitCode=0 Mar 18 13:08:53.255219 master-0 kubenswrapper[7599]: I0318 13:08:53.255172 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:53.255219 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:53.255219 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:53.255219 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:53.255734 master-0 kubenswrapper[7599]: I0318 13:08:53.255661 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:53.335158 master-0 kubenswrapper[7599]: I0318 13:08:53.335076 7599 generic.go:334] "Generic (PLEG): container finished" podID="34a3a84b-048f-4822-9f05-0e7509327ca2" containerID="f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592" exitCode=0 Mar 18 13:08:53.929113 master-0 kubenswrapper[7599]: I0318 13:08:53.928922 7599 patch_prober.go:28] interesting 
pod/authentication-operator-5885bfd7f4-sp4ld container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 18 13:08:53.929113 master-0 kubenswrapper[7599]: I0318 13:08:53.929032 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" podUID="bf9d21f9-64d6-4e21-a985-491197038568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 18 13:08:54.253114 master-0 kubenswrapper[7599]: I0318 13:08:54.253009 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:54.253114 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:54.253114 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:54.253114 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:54.253114 master-0 kubenswrapper[7599]: I0318 13:08:54.253067 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:55.254514 master-0 kubenswrapper[7599]: I0318 13:08:55.254380 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:55.254514 master-0 kubenswrapper[7599]: [-]has-synced failed: reason 
withheld Mar 18 13:08:55.254514 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:55.254514 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:55.254514 master-0 kubenswrapper[7599]: I0318 13:08:55.254492 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:56.255154 master-0 kubenswrapper[7599]: I0318 13:08:56.255020 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:56.255154 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:56.255154 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:56.255154 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:56.256103 master-0 kubenswrapper[7599]: I0318 13:08:56.255165 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:56.327264 master-0 kubenswrapper[7599]: I0318 13:08:56.327193 7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:57.257343 master-0 kubenswrapper[7599]: I0318 13:08:57.257242 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:57.257343 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:57.257343 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:57.257343 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:57.257343 master-0 kubenswrapper[7599]: I0318 13:08:57.257310 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:58.081055 master-0 kubenswrapper[7599]: E0318 13:08:58.080912 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:58.081055 master-0 kubenswrapper[7599]: I0318 13:08:58.081027 7599 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:08:58.253811 master-0 kubenswrapper[7599]: I0318 13:08:58.253738 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:58.253811 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:58.253811 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:58.253811 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:58.254127 master-0 kubenswrapper[7599]: I0318 13:08:58.253817 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:58.367407 master-0 kubenswrapper[7599]: I0318 13:08:58.367333 7599 generic.go:334] "Generic (PLEG): container finished" podID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerID="e98d728f4b1b0e813247323f6966121eae00b055f966e7db7eab7c672af9c4da" exitCode=0 Mar 18 13:08:58.722789 master-0 kubenswrapper[7599]: E0318 13:08:58.722684 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:08:59.254111 master-0 kubenswrapper[7599]: I0318 13:08:59.254025 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:08:59.254111 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:08:59.254111 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:08:59.254111 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:08:59.254397 master-0 kubenswrapper[7599]: I0318 13:08:59.254120 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:08:59.851141 master-0 kubenswrapper[7599]: I0318 13:08:59.851038 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:08:59.851870 master-0 kubenswrapper[7599]: I0318 13:08:59.851151 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:00.253895 master-0 kubenswrapper[7599]: I0318 13:09:00.253825 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:00.253895 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:00.253895 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:00.253895 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:00.253895 master-0 kubenswrapper[7599]: I0318 13:09:00.253880 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:00.379640 master-0 kubenswrapper[7599]: I0318 13:09:00.379597 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/0.log" Mar 18 13:09:00.380669 master-0 kubenswrapper[7599]: I0318 13:09:00.380620 7599 generic.go:334] "Generic (PLEG): container finished" podID="0e7156cf-2d68-4de8-b7e7-60e1539590dd" containerID="4a2b96ab3e758ccd953d067f7229799e7c3da85d90ceb61612bf33b3cfdeebe2" exitCode=1 Mar 18 
13:09:01.253775 master-0 kubenswrapper[7599]: I0318 13:09:01.253644 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:01.253775 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:01.253775 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:01.253775 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:01.254885 master-0 kubenswrapper[7599]: I0318 13:09:01.253771 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:02.254746 master-0 kubenswrapper[7599]: I0318 13:09:02.254641 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:02.254746 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:02.254746 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:02.254746 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:02.254746 master-0 kubenswrapper[7599]: I0318 13:09:02.254734 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:02.395526 master-0 kubenswrapper[7599]: I0318 13:09:02.395453 7599 generic.go:334] "Generic (PLEG): container finished" podID="a8eff549-02f3-446e-b3a1-a66cecdc02a6" 
containerID="0fa9267fcb1942ed177056f1462768d5db7582291e5f4b758f528a23e47041d8" exitCode=0 Mar 18 13:09:02.398184 master-0 kubenswrapper[7599]: I0318 13:09:02.398142 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gxxbr_f7f4ae93-428b-4ebd-bfaa-18359b407ede/network-operator/0.log" Mar 18 13:09:02.398297 master-0 kubenswrapper[7599]: I0318 13:09:02.398192 7599 generic.go:334] "Generic (PLEG): container finished" podID="f7f4ae93-428b-4ebd-bfaa-18359b407ede" containerID="0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c" exitCode=255 Mar 18 13:09:02.400783 master-0 kubenswrapper[7599]: I0318 13:09:02.400728 7599 generic.go:334] "Generic (PLEG): container finished" podID="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" containerID="34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1" exitCode=0 Mar 18 13:09:03.254749 master-0 kubenswrapper[7599]: I0318 13:09:03.254625 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:03.254749 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:03.254749 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:03.254749 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:03.255952 master-0 kubenswrapper[7599]: I0318 13:09:03.254752 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:03.929185 master-0 kubenswrapper[7599]: I0318 13:09:03.929034 7599 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-sp4ld container/authentication-operator 
namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" start-of-body= Mar 18 13:09:03.929185 master-0 kubenswrapper[7599]: I0318 13:09:03.929106 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" podUID="bf9d21f9-64d6-4e21-a985-491197038568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.29:8443/healthz\": dial tcp 10.128.0.29:8443: connect: connection refused" Mar 18 13:09:04.255081 master-0 kubenswrapper[7599]: I0318 13:09:04.254907 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:04.255081 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:04.255081 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:04.255081 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:04.255081 master-0 kubenswrapper[7599]: I0318 13:09:04.254998 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:05.253523 master-0 kubenswrapper[7599]: I0318 13:09:05.253435 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:05.253523 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:05.253523 master-0 kubenswrapper[7599]: [+]process-running 
ok Mar 18 13:09:05.253523 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:05.253523 master-0 kubenswrapper[7599]: I0318 13:09:05.253524 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:06.253953 master-0 kubenswrapper[7599]: I0318 13:09:06.253860 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:06.253953 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:06.253953 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:06.253953 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:06.253953 master-0 kubenswrapper[7599]: I0318 13:09:06.253923 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:07.254364 master-0 kubenswrapper[7599]: I0318 13:09:07.254293 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:07.254364 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:07.254364 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:07.254364 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:07.254969 master-0 kubenswrapper[7599]: I0318 13:09:07.254394 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:07.427328 master-0 kubenswrapper[7599]: I0318 13:09:07.427251 7599 generic.go:334] "Generic (PLEG): container finished" podID="595f697b-d238-4500-84ce-1ea00377f05e" containerID="a2dd4b79716d36a56d21bba417e3ebe1360ab2ee3f667763e4260bf014da2347" exitCode=0 Mar 18 13:09:08.081295 master-0 kubenswrapper[7599]: E0318 13:09:08.081239 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="200ms" Mar 18 13:09:08.253303 master-0 kubenswrapper[7599]: I0318 13:09:08.253204 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:08.253303 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:08.253303 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:08.253303 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:08.253637 master-0 kubenswrapper[7599]: I0318 13:09:08.253336 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:08.723276 master-0 kubenswrapper[7599]: E0318 13:09:08.723206 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Mar 18 13:09:08.851683 master-0 kubenswrapper[7599]: I0318 13:09:08.851500 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:08.851683 master-0 kubenswrapper[7599]: I0318 13:09:08.851581 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:09.019521 master-0 kubenswrapper[7599]: I0318 13:09:09.019404 7599 status_manager.go:851] "Failed to get status for pod" podUID="6db2bfbd-d8db-4384-8979-23e8a1e87e5e" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-operator-admission-webhook-69c6b55594-wsmsc)" Mar 18 13:09:09.254800 master-0 kubenswrapper[7599]: I0318 13:09:09.254726 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:09.254800 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:09.254800 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:09.254800 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:09.254800 master-0 kubenswrapper[7599]: I0318 13:09:09.254798 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:09.440025 master-0 kubenswrapper[7599]: I0318 13:09:09.439952 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/0.log" Mar 18 13:09:09.440025 master-0 kubenswrapper[7599]: I0318 13:09:09.440022 7599 generic.go:334] "Generic (PLEG): container finished" podID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerID="48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80" exitCode=2 Mar 18 13:09:10.254184 master-0 kubenswrapper[7599]: I0318 13:09:10.254107 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:10.254184 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:10.254184 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:10.254184 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:10.254947 master-0 kubenswrapper[7599]: I0318 13:09:10.254224 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:11.253818 master-0 kubenswrapper[7599]: I0318 13:09:11.253764 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:11.253818 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:11.253818 
master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:11.253818 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:11.254651 master-0 kubenswrapper[7599]: I0318 13:09:11.253844 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:12.256527 master-0 kubenswrapper[7599]: I0318 13:09:12.255121 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:12.256527 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:12.256527 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:12.256527 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:12.256527 master-0 kubenswrapper[7599]: I0318 13:09:12.255235 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:12.463119 master-0 kubenswrapper[7599]: I0318 13:09:12.463052 7599 generic.go:334] "Generic (PLEG): container finished" podID="15a97fe2-5022-4997-9936-4247ae7ecb43" containerID="6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6" exitCode=0 Mar 18 13:09:13.255345 master-0 kubenswrapper[7599]: I0318 13:09:13.255270 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:13.255345 master-0 kubenswrapper[7599]: 
[-]has-synced failed: reason withheld Mar 18 13:09:13.255345 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:13.255345 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:13.255345 master-0 kubenswrapper[7599]: I0318 13:09:13.255347 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:13.382670 master-0 kubenswrapper[7599]: E0318 13:09:13.382604 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:09:13.383186 master-0 kubenswrapper[7599]: E0318 13:09:13.382838 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s" Mar 18 13:09:13.385124 master-0 kubenswrapper[7599]: I0318 13:09:13.384971 7599 scope.go:117] "RemoveContainer" containerID="4f4390a1edc4e74d8425b268d4802fbbd68b0a727bcc922dd63ac0c094e61704" Mar 18 13:09:13.401456 master-0 kubenswrapper[7599]: I0318 13:09:13.401371 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:09:13.470071 master-0 kubenswrapper[7599]: I0318 13:09:13.470008 7599 generic.go:334] "Generic (PLEG): container finished" podID="bf9d21f9-64d6-4e21-a985-491197038568" containerID="e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421" exitCode=0 Mar 18 13:09:14.254031 master-0 kubenswrapper[7599]: I0318 13:09:14.253969 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:14.254031 master-0 
kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:14.254031 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:14.254031 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:14.254031 master-0 kubenswrapper[7599]: I0318 13:09:14.254022 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:15.253785 master-0 kubenswrapper[7599]: I0318 13:09:15.253698 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:15.253785 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:15.253785 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:15.253785 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:15.253785 master-0 kubenswrapper[7599]: I0318 13:09:15.253765 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:16.074646 master-0 kubenswrapper[7599]: E0318 13:09:16.074484 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{prometheus-operator-admission-webhook-69c6b55594-wsmsc.189df16f2fbe8307 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:prometheus-operator-admission-webhook-69c6b55594-wsmsc,UID:6db2bfbd-d8db-4384-8979-23e8a1e87e5e,APIVersion:v1,ResourceVersion:7903,FieldPath:spec.containers{prometheus-operator-admission-webhook},},Reason:Created,Message:Created container: prometheus-operator-admission-webhook,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08.098521863 +0000 UTC m=+63.059576105,LastTimestamp:2026-03-18 13:08:08.098521863 +0000 UTC m=+63.059576105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:09:16.253518 master-0 kubenswrapper[7599]: I0318 13:09:16.253440 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:16.253518 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:16.253518 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:16.253518 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:16.253817 master-0 kubenswrapper[7599]: I0318 13:09:16.253518 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:16.673336 master-0 kubenswrapper[7599]: E0318 13:09:16.673292 7599 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:09:16.674241 master-0 kubenswrapper[7599]: E0318 13:09:16.674220 
7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access podName:6a4c87a8-6bf0-43b2-b598-1561cba3e391 nodeName:}" failed. No retries permitted until 2026-03-18 13:09:17.674195154 +0000 UTC m=+132.635249406 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access") pod "installer-2-master-0" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:09:17.254957 master-0 kubenswrapper[7599]: I0318 13:09:17.254852 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:17.254957 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:17.254957 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:17.254957 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:17.255330 master-0 kubenswrapper[7599]: I0318 13:09:17.254969 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:17.721966 master-0 kubenswrapper[7599]: I0318 13:09:17.721889 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:18.254155 master-0 
kubenswrapper[7599]: I0318 13:09:18.254080 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:18.254155 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:18.254155 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:18.254155 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:18.254475 master-0 kubenswrapper[7599]: I0318 13:09:18.254181 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:18.284268 master-0 kubenswrapper[7599]: E0318 13:09:18.284033 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 13:09:18.725125 master-0 kubenswrapper[7599]: E0318 13:09:18.725030 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:18.851393 master-0 kubenswrapper[7599]: I0318 13:09:18.851316 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:18.851804 master-0 kubenswrapper[7599]: 
I0318 13:09:18.851401 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:19.254296 master-0 kubenswrapper[7599]: I0318 13:09:19.254224 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:19.254296 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:19.254296 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:19.254296 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:19.254635 master-0 kubenswrapper[7599]: I0318 13:09:19.254318 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:20.254461 master-0 kubenswrapper[7599]: I0318 13:09:20.254386 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:20.254461 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:20.254461 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:20.254461 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:20.255183 master-0 kubenswrapper[7599]: I0318 13:09:20.254478 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:21.253482 master-0 kubenswrapper[7599]: I0318 13:09:21.253446 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:21.253482 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:21.253482 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:21.253482 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:21.253754 master-0 kubenswrapper[7599]: I0318 13:09:21.253725 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:22.255132 master-0 kubenswrapper[7599]: I0318 13:09:22.255052 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:22.255132 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:22.255132 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:22.255132 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:22.256467 master-0 kubenswrapper[7599]: I0318 13:09:22.255171 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:23.254732 master-0 kubenswrapper[7599]: I0318 13:09:23.254664 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:23.254732 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:23.254732 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:23.254732 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:23.255167 master-0 kubenswrapper[7599]: I0318 13:09:23.254765 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:23.537531 master-0 kubenswrapper[7599]: I0318 13:09:23.537363 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_615539dc-56e1-4489-9aee-33b3e769d4fc/installer/0.log" Mar 18 13:09:23.537531 master-0 kubenswrapper[7599]: I0318 13:09:23.537442 7599 generic.go:334] "Generic (PLEG): container finished" podID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerID="60014c22022db848874d3a05474beca08d37dd24a5fad732534f373108a2dd40" exitCode=1 Mar 18 13:09:24.255894 master-0 kubenswrapper[7599]: I0318 13:09:24.255295 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:24.255894 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:24.255894 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:24.255894 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:24.255894 master-0 kubenswrapper[7599]: I0318 13:09:24.255376 7599 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:24.547766 master-0 kubenswrapper[7599]: I0318 13:09:24.547642 7599 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="c54bcf4ddd56343697f9602341ecf51d80939627fe3f4a59637f96162fa1598d" exitCode=1 Mar 18 13:09:25.254844 master-0 kubenswrapper[7599]: I0318 13:09:25.254773 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:25.254844 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:25.254844 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:25.254844 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:25.254844 master-0 kubenswrapper[7599]: I0318 13:09:25.254836 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:26.256061 master-0 kubenswrapper[7599]: I0318 13:09:26.255965 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:26.256061 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:26.256061 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:26.256061 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:26.262620 master-0 kubenswrapper[7599]: I0318 
13:09:26.256070 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:27.255027 master-0 kubenswrapper[7599]: I0318 13:09:27.254938 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:27.255027 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:27.255027 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:27.255027 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:27.255027 master-0 kubenswrapper[7599]: I0318 13:09:27.255020 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:28.254318 master-0 kubenswrapper[7599]: I0318 13:09:28.254204 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:28.254318 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:28.254318 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:28.254318 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:28.254318 master-0 kubenswrapper[7599]: I0318 13:09:28.254314 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:09:28.685275 master-0 kubenswrapper[7599]: E0318 13:09:28.685149 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 13:09:28.726761 master-0 kubenswrapper[7599]: E0318 13:09:28.726680 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 18 13:09:28.726761 master-0 kubenswrapper[7599]: E0318 13:09:28.726733 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:09:28.851845 master-0 kubenswrapper[7599]: I0318 13:09:28.851755 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:28.851845 master-0 kubenswrapper[7599]: I0318 13:09:28.851838 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:29.256360 master-0 kubenswrapper[7599]: I0318 13:09:29.256278 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 18 13:09:29.256360 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:29.256360 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:29.256360 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:29.257133 master-0 kubenswrapper[7599]: I0318 13:09:29.256384 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:30.254350 master-0 kubenswrapper[7599]: I0318 13:09:30.254259 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:30.254350 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:30.254350 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:30.254350 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:30.254350 master-0 kubenswrapper[7599]: I0318 13:09:30.254345 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:31.254358 master-0 kubenswrapper[7599]: I0318 13:09:31.254231 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:31.254358 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:31.254358 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:31.254358 
master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:31.254358 master-0 kubenswrapper[7599]: I0318 13:09:31.254315 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:32.254818 master-0 kubenswrapper[7599]: I0318 13:09:32.254672 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:32.254818 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:32.254818 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:32.254818 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:32.254818 master-0 kubenswrapper[7599]: I0318 13:09:32.254822 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:33.253826 master-0 kubenswrapper[7599]: I0318 13:09:33.253706 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:33.253826 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:33.253826 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:33.253826 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:33.253826 master-0 kubenswrapper[7599]: I0318 13:09:33.253816 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:34.255348 master-0 kubenswrapper[7599]: I0318 13:09:34.255266 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:34.255348 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:34.255348 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:34.255348 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:34.256617 master-0 kubenswrapper[7599]: I0318 13:09:34.255375 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:35.254152 master-0 kubenswrapper[7599]: I0318 13:09:35.254026 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:35.254152 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:35.254152 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:35.254152 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:35.254152 master-0 kubenswrapper[7599]: I0318 13:09:35.254125 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:36.253499 
master-0 kubenswrapper[7599]: I0318 13:09:36.253399 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:36.253499 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:36.253499 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:36.253499 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:36.253499 master-0 kubenswrapper[7599]: I0318 13:09:36.253480 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:37.254629 master-0 kubenswrapper[7599]: I0318 13:09:37.254548 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:37.254629 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:37.254629 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:37.254629 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:37.254629 master-0 kubenswrapper[7599]: I0318 13:09:37.254628 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:37.469992 master-0 kubenswrapper[7599]: I0318 13:09:37.469947 7599 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-jx4mf container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure 
output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 18 13:09:37.470281 master-0 kubenswrapper[7599]: I0318 13:09:37.470249 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" podUID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 18 13:09:38.254596 master-0 kubenswrapper[7599]: I0318 13:09:38.254509 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:38.254596 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:38.254596 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:38.254596 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:38.255321 master-0 kubenswrapper[7599]: I0318 13:09:38.254616 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:38.851751 master-0 kubenswrapper[7599]: I0318 13:09:38.851655 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:38.851965 master-0 kubenswrapper[7599]: I0318 13:09:38.851757 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" 
podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:39.253738 master-0 kubenswrapper[7599]: I0318 13:09:39.253621 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:39.253738 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:39.253738 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:39.253738 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:39.254086 master-0 kubenswrapper[7599]: I0318 13:09:39.253773 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:39.486793 master-0 kubenswrapper[7599]: E0318 13:09:39.486731 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 13:09:40.254336 master-0 kubenswrapper[7599]: I0318 13:09:40.254280 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:40.254336 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:40.254336 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:40.254336 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:09:40.254858 master-0 kubenswrapper[7599]: I0318 13:09:40.254818 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:41.253671 master-0 kubenswrapper[7599]: I0318 13:09:41.253588 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:41.253671 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:41.253671 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:41.253671 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:41.254325 master-0 kubenswrapper[7599]: I0318 13:09:41.253695 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:42.256597 master-0 kubenswrapper[7599]: I0318 13:09:42.256521 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:42.256597 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:42.256597 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:42.256597 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:42.257717 master-0 kubenswrapper[7599]: I0318 13:09:42.256625 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:43.255881 master-0 kubenswrapper[7599]: I0318 13:09:43.255770 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:43.255881 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:43.255881 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:43.255881 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:43.256382 master-0 kubenswrapper[7599]: I0318 13:09:43.255943 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:44.261086 master-0 kubenswrapper[7599]: I0318 13:09:44.260994 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:44.261086 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:44.261086 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:44.261086 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:44.261794 master-0 kubenswrapper[7599]: I0318 13:09:44.261095 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:45.255003 
master-0 kubenswrapper[7599]: I0318 13:09:45.254826 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:45.255003 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:45.255003 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:45.255003 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:45.255003 master-0 kubenswrapper[7599]: I0318 13:09:45.254925 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:46.254669 master-0 kubenswrapper[7599]: I0318 13:09:46.254586 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:46.254669 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:46.254669 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:46.254669 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:46.254669 master-0 kubenswrapper[7599]: I0318 13:09:46.254667 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:47.254085 master-0 kubenswrapper[7599]: I0318 13:09:47.253979 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:47.254085 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:47.254085 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:47.254085 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:47.254085 master-0 kubenswrapper[7599]: I0318 13:09:47.254069 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:47.404595 master-0 kubenswrapper[7599]: E0318 13:09:47.404492 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:09:47.404879 master-0 kubenswrapper[7599]: E0318 13:09:47.404688 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Mar 18 13:09:47.412695 master-0 kubenswrapper[7599]: I0318 13:09:47.412620 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:09:48.255283 master-0 kubenswrapper[7599]: I0318 13:09:48.255146 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:48.255283 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:48.255283 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:48.255283 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:48.255283 master-0 kubenswrapper[7599]: I0318 13:09:48.255234 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:48.785362 master-0 kubenswrapper[7599]: E0318 13:09:48.785048 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:09:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a
64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\\"],\\\"sizeBytes\\\":487159945},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555e
b006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0\\\"],\\\"sizeBytes\\\":444573129},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:48.851529 master-0 kubenswrapper[7599]: I0318 13:09:48.851388 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:48.851839 master-0 kubenswrapper[7599]: I0318 13:09:48.851535 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:49.254475 master-0 kubenswrapper[7599]: I0318 13:09:49.254352 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:49.254475 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:49.254475 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:49.254475 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:49.254475 master-0 kubenswrapper[7599]: I0318 13:09:49.254456 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:50.077343 master-0 kubenswrapper[7599]: E0318 13:09:50.077151 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{packageserver-ff75f747c-r46tm.189df16f306f1cc0 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-ff75f747c-r46tm,UID:3ee0f85b-219b-47cb-a22a-67d359a69881,APIVersion:v1,ResourceVersion:8643,FieldPath:spec.containers{packageserver},},Reason:Created,Message:Created container: packageserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08.110095552 +0000 UTC m=+63.071149794,LastTimestamp:2026-03-18 13:08:08.110095552 +0000 UTC m=+63.071149794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:09:50.254088 master-0 kubenswrapper[7599]: I0318 13:09:50.253930 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:50.254088 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:50.254088 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:50.254088 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:50.254542 master-0 kubenswrapper[7599]: I0318 13:09:50.254110 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:51.088067 master-0 kubenswrapper[7599]: E0318 13:09:51.087926 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 13:09:51.254793 master-0 kubenswrapper[7599]: I0318 13:09:51.254661 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:51.254793 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:51.254793 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:51.254793 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:51.254793 master-0 kubenswrapper[7599]: I0318 13:09:51.254799 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:51.725297 master-0 kubenswrapper[7599]: E0318 13:09:51.725228 7599 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:09:51.725297 master-0 kubenswrapper[7599]: E0318 13:09:51.725312 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access podName:6a4c87a8-6bf0-43b2-b598-1561cba3e391 nodeName:}" failed. No retries permitted until 2026-03-18 13:09:53.725293166 +0000 UTC m=+168.686347408 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access") pod "installer-2-master-0" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:09:52.253794 master-0 kubenswrapper[7599]: I0318 13:09:52.253707 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:52.253794 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:52.253794 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:52.253794 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:52.254574 master-0 kubenswrapper[7599]: I0318 13:09:52.253818 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:53.254533 master-0 kubenswrapper[7599]: I0318 13:09:53.254403 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:53.254533 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:53.254533 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:53.254533 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:53.254533 master-0 kubenswrapper[7599]: I0318 13:09:53.254507 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:53.717687 master-0 kubenswrapper[7599]: I0318 13:09:53.717594 7599 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="ae6b8122ce3ad297d1b8d967c790c62c2b0fe5b326636877eaeee68260e70360" exitCode=0 Mar 18 13:09:53.796438 master-0 kubenswrapper[7599]: I0318 13:09:53.795548 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:09:54.253551 master-0 kubenswrapper[7599]: I0318 13:09:54.253391 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:54.253551 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:54.253551 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:54.253551 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:54.254002 master-0 kubenswrapper[7599]: I0318 13:09:54.253598 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:55.255603 master-0 kubenswrapper[7599]: I0318 13:09:55.255500 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:55.255603 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:55.255603 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:55.255603 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:55.256605 master-0 kubenswrapper[7599]: I0318 13:09:55.255608 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:55.728979 master-0 kubenswrapper[7599]: I0318 13:09:55.728906 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/0.log" Mar 18 13:09:55.728979 master-0 kubenswrapper[7599]: I0318 13:09:55.728961 7599 generic.go:334] "Generic (PLEG): container finished" podID="d2455453-5943-49ef-bfea-cba077197da0" containerID="625aa9e7efb69e0ce2b0b79e4566d5e74a444c0e432174133ef355a88a29ba59" exitCode=1 Mar 18 13:09:56.257496 master-0 kubenswrapper[7599]: I0318 13:09:56.256819 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:56.257496 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:56.257496 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:56.257496 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:56.257496 master-0 kubenswrapper[7599]: I0318 13:09:56.256926 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:09:57.255546 master-0 kubenswrapper[7599]: I0318 13:09:57.255391 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:57.255546 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:57.255546 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:57.255546 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:57.256113 master-0 kubenswrapper[7599]: I0318 13:09:57.255582 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:58.254544 master-0 kubenswrapper[7599]: I0318 13:09:58.254460 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:58.254544 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:58.254544 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:58.254544 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:58.254544 master-0 kubenswrapper[7599]: I0318 13:09:58.254542 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:09:58.393380 master-0 kubenswrapper[7599]: I0318 13:09:58.393257 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm 
container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 18 13:09:58.393941 master-0 kubenswrapper[7599]: I0318 13:09:58.393261 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 18 13:09:58.393941 master-0 kubenswrapper[7599]: I0318 13:09:58.393464 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 18 13:09:58.393941 master-0 kubenswrapper[7599]: I0318 13:09:58.393515 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 18 13:09:58.700287 master-0 kubenswrapper[7599]: I0318 13:09:58.700155 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 13:09:58.700287 master-0 kubenswrapper[7599]: I0318 13:09:58.700177 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: 
Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 13:09:58.700714 master-0 kubenswrapper[7599]: I0318 13:09:58.700295 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 13:09:58.700714 master-0 kubenswrapper[7599]: I0318 13:09:58.700341 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 13:09:58.787118 master-0 kubenswrapper[7599]: E0318 13:09:58.787039 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:09:58.851699 master-0 kubenswrapper[7599]: I0318 13:09:58.851617 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:09:58.851984 master-0 kubenswrapper[7599]: I0318 13:09:58.851704 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:09:59.254838 master-0 kubenswrapper[7599]: I0318 13:09:59.254727 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:09:59.254838 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:09:59.254838 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:09:59.254838 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:09:59.254838 master-0 kubenswrapper[7599]: I0318 13:09:59.254830 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:10:00.254921 master-0 kubenswrapper[7599]: I0318 13:10:00.254847 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:10:00.254921 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:10:00.254921 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:10:00.254921 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:10:00.257682 master-0 kubenswrapper[7599]: I0318 13:10:00.254925 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:10:01.254364 master-0 kubenswrapper[7599]: I0318 13:10:01.254287 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:01.254364 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:01.254364 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:01.254364 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:01.254870 master-0 kubenswrapper[7599]: I0318 13:10:01.254361 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:02.254981 master-0 kubenswrapper[7599]: I0318 13:10:02.254896 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:02.254981 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:02.254981 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:02.254981 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:02.256338 master-0 kubenswrapper[7599]: I0318 13:10:02.254996 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:03.254201 master-0 kubenswrapper[7599]: I0318 13:10:03.254087 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:03.254201 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:03.254201 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:03.254201 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:03.254732 master-0 kubenswrapper[7599]: I0318 13:10:03.254203 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:04.254537 master-0 kubenswrapper[7599]: I0318 13:10:04.254406 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:04.254537 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:04.254537 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:04.254537 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:04.254537 master-0 kubenswrapper[7599]: I0318 13:10:04.254529 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:04.289372 master-0 kubenswrapper[7599]: E0318 13:10:04.289234 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 18 13:10:05.253697 master-0 kubenswrapper[7599]: I0318 13:10:05.253627 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:05.253697 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:05.253697 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:05.253697 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:05.253995 master-0 kubenswrapper[7599]: I0318 13:10:05.253725 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:06.253442 master-0 kubenswrapper[7599]: I0318 13:10:06.253372 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:06.253442 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:06.253442 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:06.253442 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:06.254198 master-0 kubenswrapper[7599]: I0318 13:10:06.253464 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:06.802435 master-0 kubenswrapper[7599]: I0318 13:10:06.802375 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/0.log"
Mar 18 13:10:06.802680 master-0 kubenswrapper[7599]: I0318 13:10:06.802454 7599 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="a09e30a0e0a70728f4eacd16714f41244f1eaa2c744901296ee7506c0e6ed81f" exitCode=1
Mar 18 13:10:07.254471 master-0 kubenswrapper[7599]: I0318 13:10:07.254399 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:10:07.254471 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:10:07.254471 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:10:07.254471 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:10:07.255048 master-0 kubenswrapper[7599]: I0318 13:10:07.254476 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:10:08.392655 master-0 kubenswrapper[7599]: I0318 13:10:08.392553 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:08.393687 master-0 kubenswrapper[7599]: I0318 13:10:08.392646 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:08.393687 master-0 kubenswrapper[7599]: I0318 13:10:08.392579 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:08.393687 master-0 kubenswrapper[7599]: I0318 13:10:08.392753 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:08.699257 master-0 kubenswrapper[7599]: I0318 13:10:08.699180 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 13:10:08.699257 master-0 kubenswrapper[7599]: I0318 13:10:08.699208 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 13:10:08.699257 master-0 kubenswrapper[7599]: I0318 13:10:08.699241 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 13:10:08.699620 master-0 kubenswrapper[7599]: I0318 13:10:08.699255 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 13:10:08.788179 master-0 kubenswrapper[7599]: E0318 13:10:08.788063 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:10:08.853544 master-0 kubenswrapper[7599]: I0318 13:10:08.853402 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body=
Mar 18 13:10:08.853544 master-0 kubenswrapper[7599]: I0318 13:10:08.853523 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused"
Mar 18 13:10:09.025147 master-0 kubenswrapper[7599]: I0318 13:10:09.024812 7599 status_manager.go:851] "Failed to get status for pod" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods packageserver-ff75f747c-r46tm)"
Mar 18 13:10:09.863887 master-0 kubenswrapper[7599]: I0318 13:10:09.863853 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/1.log"
Mar 18 13:10:09.865130 master-0 kubenswrapper[7599]: I0318 13:10:09.865090 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/0.log"
Mar 18 13:10:09.865189 master-0 kubenswrapper[7599]: I0318 13:10:09.865155 7599 generic.go:334] "Generic (PLEG): container finished" podID="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" containerID="8dcf0d47755aa9729c9174b6d9eec6a76d4adc29a9ce8725fd5baba97772cee5" exitCode=255
Mar 18 13:10:11.005138 master-0 kubenswrapper[7599]: E0318 13:10:11.004961 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391"
Mar 18 13:10:11.876963 master-0 kubenswrapper[7599]: I0318 13:10:11.876844 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 13:10:18.393054 master-0 kubenswrapper[7599]: I0318 13:10:18.392886 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:18.393054 master-0 kubenswrapper[7599]: I0318 13:10:18.392886 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:18.393054 master-0 kubenswrapper[7599]: I0318 13:10:18.392992 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:18.393054 master-0 kubenswrapper[7599]: I0318 13:10:18.393035 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:18.699664 master-0 kubenswrapper[7599]: I0318 13:10:18.699549 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 13:10:18.699664 master-0 kubenswrapper[7599]: I0318 13:10:18.699631 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 13:10:18.700128 master-0 kubenswrapper[7599]: I0318 13:10:18.699717 7599 patch_prober.go:28] interesting pod/catalog-operator-68f85b4d6c-t84s9 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 13:10:18.700128 master-0 kubenswrapper[7599]: I0318 13:10:18.699795 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 13:10:18.789306 master-0 kubenswrapper[7599]: E0318 13:10:18.789164 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:10:18.861797 master-0 kubenswrapper[7599]: I0318 13:10:18.861673 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body=
Mar 18 13:10:18.862069 master-0 kubenswrapper[7599]: I0318 13:10:18.861827 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused"
Mar 18 13:10:18.939615 master-0 kubenswrapper[7599]: I0318 13:10:18.939539 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/0.log"
Mar 18 13:10:18.940137 master-0 kubenswrapper[7599]: I0318 13:10:18.940052 7599 generic.go:334] "Generic (PLEG): container finished" podID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerID="aebb870640af737294de5fde7faf1b19862e6f81b4ae715f35fdf208373b75e7" exitCode=1
Mar 18 13:10:18.942927 master-0 kubenswrapper[7599]: I0318 13:10:18.942901 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/0.log"
Mar 18 13:10:18.943084 master-0 kubenswrapper[7599]: I0318 13:10:18.942935 7599 generic.go:334] "Generic (PLEG): container finished" podID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerID="e6d3b86684e16237f7515b45dbb7b40a94f5f8bddf2d34d18c36a6a4d6af41b4" exitCode=1
Mar 18 13:10:20.690890 master-0 kubenswrapper[7599]: E0318 13:10:20.690700 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 13:10:20.959260 master-0 kubenswrapper[7599]: I0318 13:10:20.959126 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/0.log"
Mar 18 13:10:20.959260 master-0 kubenswrapper[7599]: I0318 13:10:20.959198 7599 generic.go:334] "Generic (PLEG): container finished" podID="deb67ea0-8342-40cb-b0f4-115270e878dd" containerID="8c9e4d7f5a1cfb905af9530af8305e93c12f5088f9374b32f042b05f77b48591" exitCode=1
Mar 18 13:10:21.415894 master-0 kubenswrapper[7599]: E0318 13:10:21.415785 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 13:10:21.416216 master-0 kubenswrapper[7599]: E0318 13:10:21.415974 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.011s"
Mar 18 13:10:21.416216 master-0 kubenswrapper[7599]: I0318 13:10:21.415998 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:10:21.416216 master-0 kubenswrapper[7599]: I0318 13:10:21.416019 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerDied","Data":"4f4390a1edc4e74d8425b268d4802fbbd68b0a727bcc922dd63ac0c094e61704"}
Mar 18 13:10:21.416216 master-0 kubenswrapper[7599]: I0318 13:10:21.416042 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:10:21.416216 master-0 kubenswrapper[7599]: I0318 13:10:21.416054 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:10:21.416807 master-0 kubenswrapper[7599]: I0318 13:10:21.416235 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429"}
Mar 18 13:10:21.416807 master-0 kubenswrapper[7599]: I0318 13:10:21.416294 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerDied","Data":"8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83"}
Mar 18 13:10:21.416807 master-0 kubenswrapper[7599]: I0318 13:10:21.416711 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c"} pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 13:10:21.416807 master-0 kubenswrapper[7599]: I0318 13:10:21.416747 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" containerID="cri-o://9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c" gracePeriod=3600
Mar 18 13:10:21.418074 master-0 kubenswrapper[7599]: I0318 13:10:21.417227 7599 scope.go:117] "RemoveContainer" containerID="625aa9e7efb69e0ce2b0b79e4566d5e74a444c0e432174133ef355a88a29ba59"
Mar 18 13:10:21.418074 master-0 kubenswrapper[7599]: I0318 13:10:21.417490 7599 scope.go:117] "RemoveContainer" containerID="8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83"
Mar 18 13:10:21.418074 master-0 kubenswrapper[7599]: I0318 13:10:21.417653 7599 scope.go:117] "RemoveContainer" containerID="c54bcf4ddd56343697f9602341ecf51d80939627fe3f4a59637f96162fa1598d"
Mar 18 13:10:21.428483 master-0 kubenswrapper[7599]: I0318 13:10:21.428088 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 13:10:21.968325 master-0 kubenswrapper[7599]: I0318 13:10:21.968159 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/0.log"
Mar 18 13:10:24.080290 master-0 kubenswrapper[7599]: E0318 13:10:24.080079 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{prometheus-operator-admission-webhook-69c6b55594-wsmsc.189df16f3146afd5 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:prometheus-operator-admission-webhook-69c6b55594-wsmsc,UID:6db2bfbd-d8db-4384-8979-23e8a1e87e5e,APIVersion:v1,ResourceVersion:7903,FieldPath:spec.containers{prometheus-operator-admission-webhook},},Reason:Started,Message:Started container prometheus-operator-admission-webhook,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08.124223445 +0000 UTC m=+63.085277687,LastTimestamp:2026-03-18 13:08:08.124223445 +0000 UTC m=+63.085277687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:10:26.021914 master-0 kubenswrapper[7599]: I0318 13:10:26.021832 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body=
Mar 18 13:10:26.022658 master-0 kubenswrapper[7599]: I0318 13:10:26.021922 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused"
Mar 18 13:10:26.049959 master-0 kubenswrapper[7599]: I0318 13:10:26.049892 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Mar 18 13:10:26.049959 master-0 kubenswrapper[7599]: I0318 13:10:26.049961 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Mar 18 13:10:27.799456 master-0 kubenswrapper[7599]: E0318 13:10:27.799353 7599 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 13:10:27.800328 master-0 kubenswrapper[7599]: E0318 13:10:27.799526 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access podName:6a4c87a8-6bf0-43b2-b598-1561cba3e391 nodeName:}" failed. No retries permitted until 2026-03-18 13:10:31.799495754 +0000 UTC m=+206.760550036 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access") pod "installer-2-master-0" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 18 13:10:28.393037 master-0 kubenswrapper[7599]: I0318 13:10:28.392884 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:28.393037 master-0 kubenswrapper[7599]: I0318 13:10:28.392987 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:28.791237 master-0 kubenswrapper[7599]: E0318 13:10:28.790374 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:10:28.791237 master-0 kubenswrapper[7599]: E0318 13:10:28.790826 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 18 13:10:28.851680 master-0 kubenswrapper[7599]: I0318 13:10:28.851605 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body=
Mar 18 13:10:28.852306 master-0 kubenswrapper[7599]: I0318 13:10:28.851683 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused"
Mar 18 13:10:31.810115 master-0 kubenswrapper[7599]: I0318 13:10:31.809994 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 13:10:34.430378 master-0 kubenswrapper[7599]: E0318 13:10:34.430316 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 13:10:36.022522 master-0 kubenswrapper[7599]: I0318 13:10:36.022323 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body=
Mar 18 13:10:36.022522 master-0 kubenswrapper[7599]: I0318 13:10:36.022452 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body=
Mar 18 13:10:36.022522 master-0 kubenswrapper[7599]: I0318 13:10:36.022493 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused"
Mar 18 13:10:36.023257 master-0 kubenswrapper[7599]: I0318 13:10:36.022516 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused"
Mar 18 13:10:36.050210 master-0 kubenswrapper[7599]: I0318 13:10:36.050103 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Mar 18 13:10:36.050547 master-0 kubenswrapper[7599]: I0318 13:10:36.050205 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Mar 18 13:10:36.050547 master-0 kubenswrapper[7599]: I0318 13:10:36.050305 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Mar 18 13:10:36.050547 master-0 kubenswrapper[7599]: I0318 13:10:36.050407 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Mar 18 13:10:37.127867 master-0 kubenswrapper[7599]: I0318 13:10:37.127816 7599 generic.go:334] "Generic (PLEG): container finished" podID="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" containerID="bdb06b047a43d8f5cc135f15126477528bd6743cd5d10a3d7306b59927303450" exitCode=0
Mar 18 13:10:37.470179 master-0 kubenswrapper[7599]: I0318 13:10:37.470106 7599 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-jx4mf container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body=
Mar 18 13:10:37.470363 master-0 kubenswrapper[7599]: I0318 13:10:37.470175 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" podUID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused"
Mar 18 13:10:37.693676 master-0 kubenswrapper[7599]: E0318 13:10:37.693545 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 13:10:38.393270 master-0 kubenswrapper[7599]: I0318 13:10:38.393147 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:38.393270 master-0 kubenswrapper[7599]: I0318 13:10:38.393251 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:38.850933 master-0 kubenswrapper[7599]: I0318 13:10:38.850838 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body=
Mar 18 13:10:38.850933 master-0 kubenswrapper[7599]: I0318 13:10:38.850926 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused"
Mar 18 13:10:42.163235 master-0 kubenswrapper[7599]: I0318 13:10:42.163117 7599 generic.go:334] "Generic (PLEG): container finished" podID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerID="bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b" exitCode=0
Mar 18 13:10:44.176308 master-0 kubenswrapper[7599]: I0318 13:10:44.176272 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-n7fn4_b75d4622-ac12-4f82-afc9-ab63e6278b0c/kube-controller-manager-operator/1.log"
Mar 18 13:10:44.177211 master-0 kubenswrapper[7599]: I0318 13:10:44.177142 7599 generic.go:334] "Generic (PLEG): container finished" podID="b75d4622-ac12-4f82-afc9-ab63e6278b0c" containerID="8fd581d9433e603018eead43b8e27a33c255b946ee133532ab11a25007d5ddfb" exitCode=255
Mar 18 13:10:46.021696 master-0 kubenswrapper[7599]: I0318 13:10:46.021583 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body=
Mar 18 13:10:46.021696 master-0 kubenswrapper[7599]: I0318 13:10:46.021681 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused"
Mar 18 13:10:46.050387 master-0 kubenswrapper[7599]: I0318 13:10:46.050304 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body=
Mar 18 13:10:46.050682 master-0 kubenswrapper[7599]: I0318 13:10:46.050392 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused"
Mar 18 13:10:48.392936 master-0 kubenswrapper[7599]: I0318 13:10:48.392876 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:10:48.393944 master-0 kubenswrapper[7599]: I0318 13:10:48.393616 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:10:48.851989 master-0 kubenswrapper[7599]: I0318 13:10:48.851869 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body=
Mar 18 13:10:48.852326 master-0 kubenswrapper[7599]: I0318 13:10:48.851986 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused"
Mar 18 13:10:48.867375 master-0 kubenswrapper[7599]: E0318 13:10:48.867164 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:10:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:10:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:10:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:10:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c69833
95b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\\"],\\\"sizeBytes\\\":487159945},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\
"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97c7a635130c574a2c501091bb44f17cd92e05e29b5102e59578b5885d9bfec0\\\"],\\\"sizeBytes\\\":444573129},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:10:50.871544 master-0 kubenswrapper[7599]: I0318 13:10:50.871145 7599 patch_prober.go:28] interesting pod/controller-manager-fffb75699-b7pwr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Mar 18 13:10:50.871544 master-0 kubenswrapper[7599]: I0318 13:10:50.871408 7599 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Mar 18 13:10:50.871544 master-0 kubenswrapper[7599]: I0318 13:10:50.871476 7599 patch_prober.go:28] interesting pod/controller-manager-fffb75699-b7pwr container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Mar 18 13:10:50.871544 master-0 kubenswrapper[7599]: I0318 13:10:50.871518 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Mar 18 13:10:51.224790 master-0 kubenswrapper[7599]: I0318 13:10:51.224708 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/0.log" Mar 18 13:10:51.224790 master-0 kubenswrapper[7599]: I0318 13:10:51.224776 7599 generic.go:334] "Generic (PLEG): container finished" podID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" containerID="49667c3562724d21d11f45af9648468c2dd5436306c9e389954957510ee7b256" exitCode=1 Mar 18 13:10:55.431077 master-0 kubenswrapper[7599]: E0318 13:10:55.430962 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 13:10:55.432296 master-0 kubenswrapper[7599]: E0318 13:10:55.431144 7599 kubelet.go:2526] "Housekeeping took longer 
than expected" err="housekeeping took too long" expected="1s" actual="34.015s" Mar 18 13:10:55.432296 master-0 kubenswrapper[7599]: I0318 13:10:55.431167 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerDied","Data":"f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592"} Mar 18 13:10:55.432296 master-0 kubenswrapper[7599]: I0318 13:10:55.432177 7599 scope.go:117] "RemoveContainer" containerID="f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592" Mar 18 13:10:55.441183 master-0 kubenswrapper[7599]: I0318 13:10:55.441118 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 13:10:56.021797 master-0 kubenswrapper[7599]: I0318 13:10:56.021622 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 13:10:56.021797 master-0 kubenswrapper[7599]: I0318 13:10:56.021702 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 13:10:56.022303 master-0 kubenswrapper[7599]: I0318 13:10:56.022262 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 13:10:56.022599 master-0 kubenswrapper[7599]: I0318 13:10:56.022553 
7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 13:10:56.050607 master-0 kubenswrapper[7599]: I0318 13:10:56.050532 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 13:10:56.050851 master-0 kubenswrapper[7599]: I0318 13:10:56.050619 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 13:10:56.050851 master-0 kubenswrapper[7599]: I0318 13:10:56.050643 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 13:10:56.050851 master-0 kubenswrapper[7599]: I0318 13:10:56.050729 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 13:10:58.083923 master-0 kubenswrapper[7599]: E0318 13:10:58.083665 
7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{packageserver-ff75f747c-r46tm.189df16f325450a6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-ff75f747c-r46tm,UID:3ee0f85b-219b-47cb-a22a-67d359a69881,APIVersion:v1,ResourceVersion:8643,FieldPath:spec.containers{packageserver},},Reason:Started,Message:Started container packageserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08.141893798 +0000 UTC m=+63.102948030,LastTimestamp:2026-03-18 13:08:08.141893798 +0000 UTC m=+63.102948030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:10:58.392382 master-0 kubenswrapper[7599]: I0318 13:10:58.392219 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 18 13:10:58.392382 master-0 kubenswrapper[7599]: I0318 13:10:58.392291 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 18 13:10:58.851902 master-0 kubenswrapper[7599]: I0318 13:10:58.851845 7599 patch_prober.go:28] interesting pod/packageserver-ff75f747c-r46tm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" start-of-body= Mar 18 13:10:58.852294 master-0 kubenswrapper[7599]: I0318 13:10:58.852252 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podUID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.58:5443/healthz\": dial tcp 10.128.0.58:5443: connect: connection refused" Mar 18 13:10:58.868605 master-0 kubenswrapper[7599]: E0318 13:10:58.868562 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:11:00.871149 master-0 kubenswrapper[7599]: I0318 13:11:00.871002 7599 patch_prober.go:28] interesting pod/controller-manager-fffb75699-b7pwr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Mar 18 13:11:00.871149 master-0 kubenswrapper[7599]: I0318 13:11:00.871066 7599 patch_prober.go:28] interesting pod/controller-manager-fffb75699-b7pwr container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" start-of-body= Mar 18 13:11:00.871149 master-0 kubenswrapper[7599]: I0318 13:11:00.871121 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: 
connection refused" Mar 18 13:11:00.872405 master-0 kubenswrapper[7599]: I0318 13:11:00.871147 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": dial tcp 10.128.0.53:8443: connect: connection refused" Mar 18 13:11:05.813368 master-0 kubenswrapper[7599]: E0318 13:11:05.813166 7599 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:11:05.813368 master-0 kubenswrapper[7599]: E0318 13:11:05.813269 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access podName:6a4c87a8-6bf0-43b2-b598-1561cba3e391 nodeName:}" failed. No retries permitted until 2026-03-18 13:11:13.81324573 +0000 UTC m=+248.774299982 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access") pod "installer-2-master-0" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 13:11:06.021979 master-0 kubenswrapper[7599]: I0318 13:11:06.021865 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-q2ndb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 13:11:06.022287 master-0 kubenswrapper[7599]: I0318 13:11:06.021978 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 13:11:06.050711 master-0 kubenswrapper[7599]: I0318 13:11:06.050573 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-9bjsj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 13:11:06.050711 master-0 kubenswrapper[7599]: I0318 13:11:06.050666 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 13:11:06.348115 master-0 kubenswrapper[7599]: E0318 13:11:06.348044 7599 kubelet.go:2526] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.917s" Mar 18 13:11:06.348115 master-0 kubenswrapper[7599]: I0318 13:11:06.348087 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"625aa9e7efb69e0ce2b0b79e4566d5e74a444c0e432174133ef355a88a29ba59"} Mar 18 13:11:06.348544 master-0 kubenswrapper[7599]: I0318 13:11:06.348166 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:06.348544 master-0 kubenswrapper[7599]: I0318 13:11:06.348180 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:11:06.348544 master-0 kubenswrapper[7599]: I0318 13:11:06.348189 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerDied","Data":"e98d728f4b1b0e813247323f6966121eae00b055f966e7db7eab7c672af9c4da"} Mar 18 13:11:06.348544 master-0 kubenswrapper[7599]: I0318 13:11:06.348203 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:11:06.348544 master-0 kubenswrapper[7599]: I0318 13:11:06.348230 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:11:06.349298 master-0 kubenswrapper[7599]: I0318 13:11:06.349230 7599 scope.go:117] "RemoveContainer" containerID="ae6b8122ce3ad297d1b8d967c790c62c2b0fe5b326636877eaeee68260e70360" Mar 18 13:11:06.350765 master-0 kubenswrapper[7599]: I0318 13:11:06.350065 7599 scope.go:117] "RemoveContainer" 
containerID="a2dd4b79716d36a56d21bba417e3ebe1360ab2ee3f667763e4260bf014da2347" Mar 18 13:11:06.350765 master-0 kubenswrapper[7599]: I0318 13:11:06.350487 7599 scope.go:117] "RemoveContainer" containerID="8c9e4d7f5a1cfb905af9530af8305e93c12f5088f9374b32f042b05f77b48591" Mar 18 13:11:06.350765 master-0 kubenswrapper[7599]: I0318 13:11:06.350527 7599 scope.go:117] "RemoveContainer" containerID="bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b" Mar 18 13:11:06.350765 master-0 kubenswrapper[7599]: I0318 13:11:06.350601 7599 scope.go:117] "RemoveContainer" containerID="a09e30a0e0a70728f4eacd16714f41244f1eaa2c744901296ee7506c0e6ed81f" Mar 18 13:11:06.351290 master-0 kubenswrapper[7599]: I0318 13:11:06.351258 7599 scope.go:117] "RemoveContainer" containerID="34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1" Mar 18 13:11:06.352630 master-0 kubenswrapper[7599]: I0318 13:11:06.352577 7599 scope.go:117] "RemoveContainer" containerID="e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421" Mar 18 13:11:06.353505 master-0 kubenswrapper[7599]: I0318 13:11:06.352798 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:06.353505 master-0 kubenswrapper[7599]: I0318 13:11:06.352827 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerDied","Data":"4a2b96ab3e758ccd953d067f7229799e7c3da85d90ceb61612bf33b3cfdeebe2"} Mar 18 13:11:06.354198 master-0 kubenswrapper[7599]: I0318 13:11:06.353842 7599 scope.go:117] "RemoveContainer" containerID="48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80" Mar 18 13:11:06.357646 master-0 kubenswrapper[7599]: I0318 13:11:06.354456 7599 scope.go:117] "RemoveContainer" containerID="aebb870640af737294de5fde7faf1b19862e6f81b4ae715f35fdf208373b75e7" Mar 18 
13:11:06.357646 master-0 kubenswrapper[7599]: I0318 13:11:06.356052 7599 scope.go:117] "RemoveContainer" containerID="6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6"
Mar 18 13:11:06.357646 master-0 kubenswrapper[7599]: I0318 13:11:06.356373 7599 scope.go:117] "RemoveContainer" containerID="0fa9267fcb1942ed177056f1462768d5db7582291e5f4b758f528a23e47041d8"
Mar 18 13:11:06.357646 master-0 kubenswrapper[7599]: I0318 13:11:06.356601 7599 scope.go:117] "RemoveContainer" containerID="0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c"
Mar 18 13:11:06.357646 master-0 kubenswrapper[7599]: I0318 13:11:06.357429 7599 scope.go:117] "RemoveContainer" containerID="49667c3562724d21d11f45af9648468c2dd5436306c9e389954957510ee7b256"
Mar 18 13:11:06.358349 master-0 kubenswrapper[7599]: I0318 13:11:06.357742 7599 scope.go:117] "RemoveContainer" containerID="4a2b96ab3e758ccd953d067f7229799e7c3da85d90ceb61612bf33b3cfdeebe2"
Mar 18 13:11:06.358587 master-0 kubenswrapper[7599]: I0318 13:11:06.358558 7599 scope.go:117] "RemoveContainer" containerID="8dcf0d47755aa9729c9174b6d9eec6a76d4adc29a9ce8725fd5baba97772cee5"
Mar 18 13:11:06.358758 master-0 kubenswrapper[7599]: I0318 13:11:06.358727 7599 scope.go:117] "RemoveContainer" containerID="bdb06b047a43d8f5cc135f15126477528bd6743cd5d10a3d7306b59927303450"
Mar 18 13:11:06.360503 master-0 kubenswrapper[7599]: I0318 13:11:06.360470 7599 scope.go:117] "RemoveContainer" containerID="e98d728f4b1b0e813247323f6966121eae00b055f966e7db7eab7c672af9c4da"
Mar 18 13:11:06.360629 master-0 kubenswrapper[7599]: I0318 13:11:06.360580 7599 scope.go:117] "RemoveContainer" containerID="e6d3b86684e16237f7515b45dbb7b40a94f5f8bddf2d34d18c36a6a4d6af41b4"
Mar 18 13:11:06.361352 master-0 kubenswrapper[7599]: I0318 13:11:06.361296 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 13:11:06.364659 master-0 kubenswrapper[7599]: I0318 13:11:06.364634 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:11:06.364710 master-0 kubenswrapper[7599]: I0318 13:11:06.364668 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:11:06.364710 master-0 kubenswrapper[7599]: I0318 13:11:06.364686 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerDied","Data":"0fa9267fcb1942ed177056f1462768d5db7582291e5f4b758f528a23e47041d8"}
Mar 18 13:11:06.364710 master-0 kubenswrapper[7599]: I0318 13:11:06.364707 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364720 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerDied","Data":"0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364737 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364746 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerDied","Data":"34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364761 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364773 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364782 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerDied","Data":"a2dd4b79716d36a56d21bba417e3ebe1360ab2ee3f667763e4260bf014da2347"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364794 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerDied","Data":"48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364805 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerDied","Data":"6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364819 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerDied","Data":"e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364831 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"8fd581d9433e603018eead43b8e27a33c255b946ee133532ab11a25007d5ddfb"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364841 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerDied","Data":"60014c22022db848874d3a05474beca08d37dd24a5fad732534f373108a2dd40"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364852 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"c54bcf4ddd56343697f9602341ecf51d80939627fe3f4a59637f96162fa1598d"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364865 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"ae6b8122ce3ad297d1b8d967c790c62c2b0fe5b326636877eaeee68260e70360"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364877 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerDied","Data":"625aa9e7efb69e0ce2b0b79e4566d5e74a444c0e432174133ef355a88a29ba59"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364889 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"a09e30a0e0a70728f4eacd16714f41244f1eaa2c744901296ee7506c0e6ed81f"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364902 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerDied","Data":"8dcf0d47755aa9729c9174b6d9eec6a76d4adc29a9ce8725fd5baba97772cee5"}
Mar 18 13:11:06.364900 master-0 kubenswrapper[7599]: I0318 13:11:06.364917 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerDied","Data":"aebb870640af737294de5fde7faf1b19862e6f81b4ae715f35fdf208373b75e7"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364928 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerDied","Data":"e6d3b86684e16237f7515b45dbb7b40a94f5f8bddf2d34d18c36a6a4d6af41b4"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364941 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerDied","Data":"8c9e4d7f5a1cfb905af9530af8305e93c12f5088f9374b32f042b05f77b48591"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364952 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364962 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f8d9ce1d67226c0b362cac090a8a6e718851e873d29da1183f8e1cd8096dfcfa"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364972 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerStarted","Data":"ece7f8de2256ca4c5499a6c68682a60215b1ff9074f8ada25360681bd459a76c"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364983 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.364993 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365003 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365011 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365020 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365030 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerDied","Data":"bdb06b047a43d8f5cc135f15126477528bd6743cd5d10a3d7306b59927303450"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365041 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerDied","Data":"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365055 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerDied","Data":"8fd581d9433e603018eead43b8e27a33c255b946ee133532ab11a25007d5ddfb"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365066 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerDied","Data":"49667c3562724d21d11f45af9648468c2dd5436306c9e389954957510ee7b256"}
Mar 18 13:11:06.365806 master-0 kubenswrapper[7599]: I0318 13:11:06.365080 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerStarted","Data":"f3ae91480a2e8eb448094d4c03f841ffed318076eee9f40a63820ede2deb2573"}
Mar 18 13:11:06.366202 master-0 kubenswrapper[7599]: I0318 13:11:06.365901 7599 scope.go:117] "RemoveContainer" containerID="f95a3bc3d3ba83cb38567fab408924e4ffe01d6a95b0daefb0d6bae2338f0525"
Mar 18 13:11:06.366202 master-0 kubenswrapper[7599]: I0318 13:11:06.366020 7599 scope.go:117] "RemoveContainer" containerID="8fd581d9433e603018eead43b8e27a33c255b946ee133532ab11a25007d5ddfb"
Mar 18 13:11:06.423640 master-0 kubenswrapper[7599]: I0318 13:11:06.423347 7599 scope.go:117] "RemoveContainer" containerID="58ba17cb9e47416db3b6a0a6b8c2a2608308d20a79593a16babea0c6f26ec54c"
Mar 18 13:11:06.441698 master-0 kubenswrapper[7599]: I0318 13:11:06.440922 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 13:11:06.441698 master-0 kubenswrapper[7599]: I0318 13:11:06.440964 7599 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="aca10bb0-fe57-44cb-9d2e-15d0f748f87a"
Mar 18 13:11:06.449466 master-0 kubenswrapper[7599]: I0318 13:11:06.444693 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 13:11:06.449466 master-0 kubenswrapper[7599]: I0318 13:11:06.444739 7599 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="aca10bb0-fe57-44cb-9d2e-15d0f748f87a"
Mar 18 13:11:06.473276 master-0 kubenswrapper[7599]: I0318 13:11:06.473209 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 18 13:11:06.482012 master-0 kubenswrapper[7599]: I0318 13:11:06.481730 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 13:11:06.484847 master-0 kubenswrapper[7599]: I0318 13:11:06.484827 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 18 13:11:06.494681 master-0 kubenswrapper[7599]: I0318 13:11:06.494428 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 13:11:06.496552 master-0 kubenswrapper[7599]: I0318 13:11:06.496506 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 13:11:06.551981 master-0 kubenswrapper[7599]: I0318 13:11:06.551908 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" podStartSLOduration=186.507915289 podStartE2EDuration="3m18.55188605s" podCreationTimestamp="2026-03-18 13:07:48 +0000 UTC" firstStartedPulling="2026-03-18 13:07:55.774387659 +0000 UTC m=+50.735441891" lastFinishedPulling="2026-03-18 13:08:07.81835841 +0000 UTC m=+62.779412652" observedRunningTime="2026-03-18 13:11:06.547729591 +0000 UTC m=+241.508783833" watchObservedRunningTime="2026-03-18 13:11:06.55188605 +0000 UTC m=+241.512940312"
Mar 18 13:11:06.553675 master-0 kubenswrapper[7599]: I0318 13:11:06.553635 7599 scope.go:117] "RemoveContainer" containerID="4f4390a1edc4e74d8425b268d4802fbbd68b0a727bcc922dd63ac0c094e61704"
Mar 18 13:11:06.653761 master-0 kubenswrapper[7599]: I0318 13:11:06.653700 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.653684697 podStartE2EDuration="653.684697ms" podCreationTimestamp="2026-03-18 13:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:06.652101012 +0000 UTC m=+241.613155254" watchObservedRunningTime="2026-03-18 13:11:06.653684697 +0000 UTC m=+241.614738939"
Mar 18 13:11:07.318376 master-0 kubenswrapper[7599]: I0318 13:11:07.318321 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerStarted","Data":"11285df327738337914cc0ae565734b64d8fbdbaed5cdcd21d8f84db43967978"}
Mar 18 13:11:07.319822 master-0 kubenswrapper[7599]: I0318 13:11:07.319805 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerStarted","Data":"cb973050d91145843fda6519effa669a5d62a92181e514441bd6c04fe69dc004"}
Mar 18 13:11:07.321616 master-0 kubenswrapper[7599]: I0318 13:11:07.321584 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/0.log"
Mar 18 13:11:07.321706 master-0 kubenswrapper[7599]: I0318 13:11:07.321663 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10"}
Mar 18 13:11:07.323240 master-0 kubenswrapper[7599]: I0318 13:11:07.323219 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/0.log"
Mar 18 13:11:07.324378 master-0 kubenswrapper[7599]: I0318 13:11:07.324353 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088"}
Mar 18 13:11:07.324617 master-0 kubenswrapper[7599]: I0318 13:11:07.324596 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb"
Mar 18 13:11:07.326006 master-0 kubenswrapper[7599]: I0318 13:11:07.325981 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerStarted","Data":"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"}
Mar 18 13:11:07.326210 master-0 kubenswrapper[7599]: I0318 13:11:07.326196 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:11:07.327690 master-0 kubenswrapper[7599]: I0318 13:11:07.327651 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gxxbr_f7f4ae93-428b-4ebd-bfaa-18359b407ede/network-operator/0.log"
Mar 18 13:11:07.327806 master-0 kubenswrapper[7599]: I0318 13:11:07.327771 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerStarted","Data":"12f0bb461bc477b8eb65dc72156ed9ad8f7e41968ae2d0ef9cad32f3e837b199"}
Mar 18 13:11:07.329630 master-0 kubenswrapper[7599]: I0318 13:11:07.329603 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b"}
Mar 18 13:11:07.330252 master-0 kubenswrapper[7599]: I0318 13:11:07.330220 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:11:07.331586 master-0 kubenswrapper[7599]: I0318 13:11:07.331552 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerStarted","Data":"b1d807d6b0428c0212050973f0490d6a880b69e8127e076fbe197dddf8a96d5b"}
Mar 18 13:11:07.333741 master-0 kubenswrapper[7599]: I0318 13:11:07.333705 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/1.log"
Mar 18 13:11:07.333811 master-0 kubenswrapper[7599]: I0318 13:11:07.333789 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311"}
Mar 18 13:11:07.338756 master-0 kubenswrapper[7599]: I0318 13:11:07.338710 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/0.log"
Mar 18 13:11:07.338883 master-0 kubenswrapper[7599]: I0318 13:11:07.338815 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6"}
Mar 18 13:11:07.341363 master-0 kubenswrapper[7599]: I0318 13:11:07.341322 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/0.log"
Mar 18 13:11:07.341439 master-0 kubenswrapper[7599]: I0318 13:11:07.341405 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c"}
Mar 18 13:11:07.344696 master-0 kubenswrapper[7599]: I0318 13:11:07.344631 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a"}
Mar 18 13:11:07.350818 master-0 kubenswrapper[7599]: I0318 13:11:07.350771 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/0.log"
Mar 18 13:11:07.350993 master-0 kubenswrapper[7599]: I0318 13:11:07.350972 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d"}
Mar 18 13:11:07.351078 master-0 kubenswrapper[7599]: I0318 13:11:07.351066 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:11:07.353983 master-0 kubenswrapper[7599]: I0318 13:11:07.353563 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-n7fn4_b75d4622-ac12-4f82-afc9-ab63e6278b0c/kube-controller-manager-operator/1.log"
Mar 18 13:11:07.353983 master-0 kubenswrapper[7599]: I0318 13:11:07.353670 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2"}
Mar 18 13:11:07.356783 master-0 kubenswrapper[7599]: I0318 13:11:07.356734 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/0.log"
Mar 18 13:11:07.357451 master-0 kubenswrapper[7599]: I0318 13:11:07.357382 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09"}
Mar 18 13:11:07.360628 master-0 kubenswrapper[7599]: I0318 13:11:07.360602 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/0.log"
Mar 18 13:11:07.360707 master-0 kubenswrapper[7599]: I0318 13:11:07.360691 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerStarted","Data":"a173260494fc7cb4b3e5f060c679f7a75fbec9929d0f639c7f0f786a29fccfb7"}
Mar 18 13:11:07.360993 master-0 kubenswrapper[7599]: I0318 13:11:07.360974 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"
Mar 18 13:11:07.362356 master-0 kubenswrapper[7599]: I0318 13:11:07.362332 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b"}
Mar 18 13:11:07.364096 master-0 kubenswrapper[7599]: I0318 13:11:07.364047 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54"}
Mar 18 13:11:07.367059 master-0 kubenswrapper[7599]: I0318 13:11:07.366995 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8"}
Mar 18 13:11:07.368050 master-0 kubenswrapper[7599]: I0318 13:11:07.368011 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:11:07.368112 master-0 kubenswrapper[7599]: I0318 13:11:07.368046 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:11:07.380274 master-0 kubenswrapper[7599]: I0318 13:11:07.380198 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250b63d5-21ee-44d3-821e-f42a8112dc50" path="/var/lib/kubelet/pods/250b63d5-21ee-44d3-821e-f42a8112dc50/volumes"
Mar 18 13:11:07.381454 master-0 kubenswrapper[7599]: I0318 13:11:07.381384 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" path="/var/lib/kubelet/pods/6298bf7b-ba09-4b4a-a0c6-f1989113eb5f/volumes"
Mar 18 13:11:07.383065 master-0 kubenswrapper[7599]: I0318 13:11:07.382990 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:11:07.383065 master-0 kubenswrapper[7599]: I0318 13:11:07.383053 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9"
Mar 18 13:11:07.466922 master-0 kubenswrapper[7599]: I0318 13:11:07.466178 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"
Mar 18 13:11:07.641166 master-0 kubenswrapper[7599]: I0318 13:11:07.641096 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_615539dc-56e1-4489-9aee-33b3e769d4fc/installer/0.log"
Mar 18 13:11:07.641371 master-0 kubenswrapper[7599]: I0318 13:11:07.641204 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:11:07.763405 master-0 kubenswrapper[7599]: I0318 13:11:07.763314 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" podStartSLOduration=189.763295291 podStartE2EDuration="3m9.763295291s" podCreationTimestamp="2026-03-18 13:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:07.740171083 +0000 UTC m=+242.701225355" watchObservedRunningTime="2026-03-18 13:11:07.763295291 +0000 UTC m=+242.724349533"
Mar 18 13:11:07.805423 master-0 kubenswrapper[7599]: I0318 13:11:07.805343 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir\") pod \"615539dc-56e1-4489-9aee-33b3e769d4fc\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") "
Mar 18 13:11:07.805613 master-0 kubenswrapper[7599]: I0318 13:11:07.805480 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock\") pod \"615539dc-56e1-4489-9aee-33b3e769d4fc\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") "
Mar 18 13:11:07.805613 master-0 kubenswrapper[7599]: I0318 13:11:07.805531 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "615539dc-56e1-4489-9aee-33b3e769d4fc" (UID: "615539dc-56e1-4489-9aee-33b3e769d4fc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:07.805613 master-0 kubenswrapper[7599]: I0318 13:11:07.805554 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access\") pod \"615539dc-56e1-4489-9aee-33b3e769d4fc\" (UID: \"615539dc-56e1-4489-9aee-33b3e769d4fc\") "
Mar 18 13:11:07.805701 master-0 kubenswrapper[7599]: I0318 13:11:07.805656 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock" (OuterVolumeSpecName: "var-lock") pod "615539dc-56e1-4489-9aee-33b3e769d4fc" (UID: "615539dc-56e1-4489-9aee-33b3e769d4fc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:11:07.806117 master-0 kubenswrapper[7599]: I0318 13:11:07.806086 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:07.806165 master-0 kubenswrapper[7599]: I0318 13:11:07.806120 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/615539dc-56e1-4489-9aee-33b3e769d4fc-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:07.816503 master-0 kubenswrapper[7599]: I0318 13:11:07.814201 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "615539dc-56e1-4489-9aee-33b3e769d4fc" (UID: "615539dc-56e1-4489-9aee-33b3e769d4fc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:11:07.906978 master-0 kubenswrapper[7599]: I0318 13:11:07.906799 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/615539dc-56e1-4489-9aee-33b3e769d4fc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:11:08.377142 master-0 kubenswrapper[7599]: I0318 13:11:08.377082 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_615539dc-56e1-4489-9aee-33b3e769d4fc/installer/0.log"
Mar 18 13:11:08.377802 master-0 kubenswrapper[7599]: I0318 13:11:08.377276 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerDied","Data":"2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f"}
Mar 18 13:11:08.377802 master-0 kubenswrapper[7599]: I0318 13:11:08.377300 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f"
Mar 18 13:11:08.377802 master-0 kubenswrapper[7599]: I0318 13:11:08.377539 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:11:08.443571 master-0 kubenswrapper[7599]: I0318 13:11:08.443485 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 13:11:08.498296 master-0 kubenswrapper[7599]: E0318 13:11:08.498245 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod615539dc_56e1_4489_9aee_33b3e769d4fc.slice/crio-2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod615539dc_56e1_4489_9aee_33b3e769d4fc.slice\": RecentStats: unable to find data in memory cache]"
Mar 18 13:11:09.384906 master-0 kubenswrapper[7599]: I0318 13:11:09.384860 7599 generic.go:334] "Generic (PLEG): container finished" podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c" exitCode=0
Mar 18 13:11:09.385483 master-0 kubenswrapper[7599]: I0318 13:11:09.384960 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c"}
Mar 18 13:11:09.385483 master-0 kubenswrapper[7599]: I0318 13:11:09.385005 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd"}
Mar 18 13:11:10.252190 master-0 kubenswrapper[7599]: I0318 13:11:10.252065 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:11:10.255572 master-0 kubenswrapper[7599]: I0318 13:11:10.255537 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:10.255572 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:10.255572 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:10.255572 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:10.255743 master-0 kubenswrapper[7599]: I0318 13:11:10.255596 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:11.110979 master-0 kubenswrapper[7599]: I0318 13:11:11.110856 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 18 13:11:11.110979 master-0 kubenswrapper[7599]: I0318 13:11:11.110916 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 13:11:11.135304 master-0 kubenswrapper[7599]: I0318 13:11:11.135257 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 13:11:11.254349 master-0 kubenswrapper[7599]: I0318 13:11:11.254245 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:11.254349 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:11.254349 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:11.254349 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:11.254851 master-0
kubenswrapper[7599]: I0318 13:11:11.254356 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:11.416567 master-0 kubenswrapper[7599]: I0318 13:11:11.416480 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 13:11:12.255175 master-0 kubenswrapper[7599]: I0318 13:11:12.255057 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:12.255175 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:12.255175 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:12.255175 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:12.256497 master-0 kubenswrapper[7599]: I0318 13:11:12.255194 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:13.254755 master-0 kubenswrapper[7599]: I0318 13:11:13.254660 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:13.254755 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:13.254755 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:13.254755 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:13.256149 master-0 kubenswrapper[7599]: I0318 
13:11:13.254775 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:13.324775 master-0 kubenswrapper[7599]: I0318 13:11:13.324680 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:11:13.331014 master-0 kubenswrapper[7599]: I0318 13:11:13.330958 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:11:13.416107 master-0 kubenswrapper[7599]: I0318 13:11:13.416045 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:11:13.885618 master-0 kubenswrapper[7599]: I0318 13:11:13.885538 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:11:14.253642 master-0 kubenswrapper[7599]: I0318 13:11:14.253575 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:14.253642 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:14.253642 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:14.253642 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:14.253642 master-0 kubenswrapper[7599]: I0318 13:11:14.253638 7599 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:15.252214 master-0 kubenswrapper[7599]: I0318 13:11:15.252111 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:11:15.254485 master-0 kubenswrapper[7599]: I0318 13:11:15.254393 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:15.254485 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:15.254485 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:15.254485 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:15.254770 master-0 kubenswrapper[7599]: I0318 13:11:15.254517 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:16.024277 master-0 kubenswrapper[7599]: I0318 13:11:16.024194 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:11:16.055348 master-0 kubenswrapper[7599]: I0318 13:11:16.055278 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:11:16.254677 master-0 kubenswrapper[7599]: I0318 13:11:16.254589 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:16.254677 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:16.254677 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:16.254677 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:16.255322 master-0 kubenswrapper[7599]: I0318 13:11:16.254690 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:17.253638 master-0 kubenswrapper[7599]: I0318 13:11:17.253561 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:17.253638 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:17.253638 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:17.253638 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:17.253951 master-0 kubenswrapper[7599]: I0318 13:11:17.253647 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:18.253355 master-0 kubenswrapper[7599]: I0318 13:11:18.253254 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:18.253355 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:18.253355 
master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:18.253355 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:18.253355 master-0 kubenswrapper[7599]: I0318 13:11:18.253317 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:19.258081 master-0 kubenswrapper[7599]: I0318 13:11:19.257985 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:19.258081 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:19.258081 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:19.258081 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:19.258904 master-0 kubenswrapper[7599]: I0318 13:11:19.258080 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:20.255154 master-0 kubenswrapper[7599]: I0318 13:11:20.255032 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:20.255154 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:20.255154 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:20.255154 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:20.255613 master-0 kubenswrapper[7599]: I0318 13:11:20.255179 7599 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:21.254129 master-0 kubenswrapper[7599]: I0318 13:11:21.253999 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:21.254129 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:21.254129 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:21.254129 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:21.254129 master-0 kubenswrapper[7599]: I0318 13:11:21.254083 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:22.254332 master-0 kubenswrapper[7599]: I0318 13:11:22.254197 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:22.254332 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:22.254332 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:22.254332 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:22.254332 master-0 kubenswrapper[7599]: I0318 13:11:22.254302 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:11:22.467099 master-0 kubenswrapper[7599]: I0318 13:11:22.467048 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/1.log" Mar 18 13:11:22.467540 master-0 kubenswrapper[7599]: I0318 13:11:22.467517 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/0.log" Mar 18 13:11:22.467604 master-0 kubenswrapper[7599]: I0318 13:11:22.467556 7599 generic.go:334] "Generic (PLEG): container finished" podID="d2455453-5943-49ef-bfea-cba077197da0" containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0" exitCode=1 Mar 18 13:11:22.467604 master-0 kubenswrapper[7599]: I0318 13:11:22.467583 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerDied","Data":"c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0"} Mar 18 13:11:22.467690 master-0 kubenswrapper[7599]: I0318 13:11:22.467615 7599 scope.go:117] "RemoveContainer" containerID="625aa9e7efb69e0ce2b0b79e4566d5e74a444c0e432174133ef355a88a29ba59" Mar 18 13:11:22.468575 master-0 kubenswrapper[7599]: I0318 13:11:22.468521 7599 scope.go:117] "RemoveContainer" containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0" Mar 18 13:11:22.468976 master-0 kubenswrapper[7599]: E0318 13:11:22.468915 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator pod=catalog-operator-68f85b4d6c-t84s9_openshift-operator-lifecycle-manager(d2455453-5943-49ef-bfea-cba077197da0)\"" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" Mar 18 13:11:22.952836 master-0 kubenswrapper[7599]: I0318 13:11:22.952752 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:11:22.985593 master-0 kubenswrapper[7599]: I0318 13:11:22.985555 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-w7jpc" Mar 18 13:11:22.994477 master-0 kubenswrapper[7599]: I0318 13:11:22.994430 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:11:23.253367 master-0 kubenswrapper[7599]: I0318 13:11:23.253177 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:23.253367 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:23.253367 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:23.253367 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:23.253367 master-0 kubenswrapper[7599]: I0318 13:11:23.253237 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:23.453377 master-0 kubenswrapper[7599]: I0318 13:11:23.453310 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 13:11:23.459211 master-0 kubenswrapper[7599]: W0318 13:11:23.459130 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6a4c87a8_6bf0_43b2_b598_1561cba3e391.slice/crio-54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4 WatchSource:0}: Error finding container 54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4: Status 404 returned error can't find the container with id 54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4 Mar 18 13:11:23.475609 master-0 kubenswrapper[7599]: I0318 13:11:23.475533 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerStarted","Data":"54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4"} Mar 18 13:11:23.479477 master-0 kubenswrapper[7599]: I0318 13:11:23.479394 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/1.log" Mar 18 13:11:24.254470 master-0 kubenswrapper[7599]: I0318 13:11:24.254359 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:24.254470 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:24.254470 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:24.254470 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:24.254870 master-0 kubenswrapper[7599]: I0318 13:11:24.254482 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:24.487682 master-0 kubenswrapper[7599]: I0318 13:11:24.487606 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerStarted","Data":"d3c2d483573799510afcab12d760b1183078a2dd2aa3d3d851d413db0b1d8ab1"} Mar 18 13:11:24.515857 master-0 kubenswrapper[7599]: I0318 13:11:24.515644 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=197.515622438 podStartE2EDuration="3m17.515622438s" podCreationTimestamp="2026-03-18 13:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:24.512358723 +0000 UTC m=+259.473412965" watchObservedRunningTime="2026-03-18 13:11:24.515622438 +0000 UTC m=+259.476676690" Mar 18 13:11:25.253576 master-0 kubenswrapper[7599]: I0318 13:11:25.253476 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:25.253576 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:25.253576 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:25.253576 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:25.253971 master-0 kubenswrapper[7599]: I0318 13:11:25.253580 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:26.253699 master-0 kubenswrapper[7599]: I0318 
13:11:26.253648 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:26.253699 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:26.253699 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:26.253699 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:26.254251 master-0 kubenswrapper[7599]: I0318 13:11:26.253718 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:26.996217 master-0 kubenswrapper[7599]: I0318 13:11:26.996155 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"] Mar 18 13:11:26.996651 master-0 kubenswrapper[7599]: E0318 13:11:26.996628 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:11:26.996795 master-0 kubenswrapper[7599]: I0318 13:11:26.996778 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:11:26.996884 master-0 kubenswrapper[7599]: E0318 13:11:26.996870 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:11:26.996970 master-0 kubenswrapper[7599]: I0318 13:11:26.996954 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:11:26.997075 master-0 kubenswrapper[7599]: E0318 13:11:26.997060 7599 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="250b63d5-21ee-44d3-821e-f42a8112dc50" containerName="installer" Mar 18 13:11:26.997158 master-0 kubenswrapper[7599]: I0318 13:11:26.997144 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="250b63d5-21ee-44d3-821e-f42a8112dc50" containerName="installer" Mar 18 13:11:26.997245 master-0 kubenswrapper[7599]: E0318 13:11:26.997232 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" containerName="installer" Mar 18 13:11:26.997328 master-0 kubenswrapper[7599]: I0318 13:11:26.997315 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" containerName="installer" Mar 18 13:11:26.997415 master-0 kubenswrapper[7599]: E0318 13:11:26.997397 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:11:26.997578 master-0 kubenswrapper[7599]: I0318 13:11:26.997561 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:11:26.997806 master-0 kubenswrapper[7599]: I0318 13:11:26.997788 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:11:26.997931 master-0 kubenswrapper[7599]: I0318 13:11:26.997892 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="250b63d5-21ee-44d3-821e-f42a8112dc50" containerName="installer" Mar 18 13:11:26.998028 master-0 kubenswrapper[7599]: I0318 13:11:26.998014 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="6298bf7b-ba09-4b4a-a0c6-f1989113eb5f" containerName="installer" Mar 18 13:11:26.998109 master-0 kubenswrapper[7599]: I0318 13:11:26.998095 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:11:26.998199 master-0 kubenswrapper[7599]: I0318 13:11:26.998185 
7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:11:26.998650 master-0 kubenswrapper[7599]: I0318 13:11:26.998635 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.000522 master-0 kubenswrapper[7599]: I0318 13:11:27.000439 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqvgp" Mar 18 13:11:27.012636 master-0 kubenswrapper[7599]: I0318 13:11:27.001768 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:11:27.034141 master-0 kubenswrapper[7599]: I0318 13:11:27.034098 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"] Mar 18 13:11:27.099443 master-0 kubenswrapper[7599]: I0318 13:11:27.095550 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95"] Mar 18 13:11:27.099443 master-0 kubenswrapper[7599]: I0318 13:11:27.096503 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.107443 master-0 kubenswrapper[7599]: I0318 13:11:27.105859 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-67jff" Mar 18 13:11:27.107443 master-0 kubenswrapper[7599]: I0318 13:11:27.105883 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 13:11:27.107443 master-0 kubenswrapper[7599]: I0318 13:11:27.106074 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:11:27.107443 master-0 kubenswrapper[7599]: I0318 13:11:27.106550 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:11:27.108914 master-0 kubenswrapper[7599]: I0318 13:11:27.108867 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:11:27.133452 master-0 kubenswrapper[7599]: I0318 13:11:27.123858 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95"] Mar 18 13:11:27.174099 master-0 kubenswrapper[7599]: I0318 13:11:27.171525 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.174099 master-0 kubenswrapper[7599]: I0318 13:11:27.171607 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fw5f\" (UniqueName: \"kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.254669 master-0 kubenswrapper[7599]: I0318 13:11:27.254561 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:27.254669 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:27.254669 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:27.254669 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:27.254669 master-0 kubenswrapper[7599]: I0318 13:11:27.254631 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:27.272895 master-0 kubenswrapper[7599]: I0318 13:11:27.272842 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.273006 master-0 kubenswrapper[7599]: I0318 13:11:27.272925 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.273210 master-0 kubenswrapper[7599]: I0318 13:11:27.273173 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.273335 master-0 kubenswrapper[7599]: I0318 13:11:27.273307 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fw5f\" (UniqueName: \"kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.273470 master-0 kubenswrapper[7599]: I0318 13:11:27.273450 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jwh\" (UniqueName: \"kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.276607 master-0 kubenswrapper[7599]: I0318 13:11:27.276525 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.289014 master-0 kubenswrapper[7599]: I0318 13:11:27.288956 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fw5f\" (UniqueName: \"kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.319251 master-0 kubenswrapper[7599]: I0318 13:11:27.319202 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:11:27.375021 master-0 kubenswrapper[7599]: I0318 13:11:27.374966 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.375290 master-0 kubenswrapper[7599]: I0318 13:11:27.375258 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.375342 master-0 kubenswrapper[7599]: 
I0318 13:11:27.375330 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jwh\" (UniqueName: \"kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.375826 master-0 kubenswrapper[7599]: I0318 13:11:27.375741 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.382491 master-0 kubenswrapper[7599]: I0318 13:11:27.379571 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.478351 master-0 kubenswrapper[7599]: I0318 13:11:27.478290 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jwh\" (UniqueName: \"kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.718598 master-0 kubenswrapper[7599]: I0318 13:11:27.718556 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"] Mar 18 13:11:27.724394 master-0 kubenswrapper[7599]: I0318 13:11:27.724033 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:11:27.849196 master-0 kubenswrapper[7599]: I0318 13:11:27.848375 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk"] Mar 18 13:11:27.850108 master-0 kubenswrapper[7599]: I0318 13:11:27.850032 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:27.853832 master-0 kubenswrapper[7599]: I0318 13:11:27.853495 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:11:27.854029 master-0 kubenswrapper[7599]: I0318 13:11:27.853930 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r54p6" Mar 18 13:11:27.862551 master-0 kubenswrapper[7599]: I0318 13:11:27.854260 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:11:27.862551 master-0 kubenswrapper[7599]: I0318 13:11:27.862268 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:11:27.870351 master-0 kubenswrapper[7599]: I0318 13:11:27.870212 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk"] Mar 18 13:11:27.980939 master-0 kubenswrapper[7599]: I0318 13:11:27.980816 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:27.981228 master-0 kubenswrapper[7599]: I0318 13:11:27.981155 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:27.981311 master-0 kubenswrapper[7599]: I0318 13:11:27.981284 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:27.981408 master-0 kubenswrapper[7599]: I0318 13:11:27.981389 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lv2\" (UniqueName: \"kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.082750 master-0 kubenswrapper[7599]: I0318 13:11:28.082701 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lv2\" (UniqueName: \"kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: 
\"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.082954 master-0 kubenswrapper[7599]: I0318 13:11:28.082775 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.082954 master-0 kubenswrapper[7599]: I0318 13:11:28.082821 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.082954 master-0 kubenswrapper[7599]: I0318 13:11:28.082860 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.083592 master-0 kubenswrapper[7599]: E0318 13:11:28.083548 7599 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 13:11:28.083678 master-0 kubenswrapper[7599]: E0318 13:11:28.083624 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls podName:fb65c095-ca20-432c-a069-ad6719fca9c8 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:11:28.583604316 +0000 UTC m=+263.544658558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-7tcjk" (UID: "fb65c095-ca20-432c-a069-ad6719fca9c8") : secret "prometheus-operator-tls" not found Mar 18 13:11:28.085458 master-0 kubenswrapper[7599]: I0318 13:11:28.084561 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.086659 master-0 kubenswrapper[7599]: I0318 13:11:28.086625 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.103205 master-0 kubenswrapper[7599]: I0318 13:11:28.103156 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lv2\" (UniqueName: \"kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.170009 master-0 kubenswrapper[7599]: I0318 13:11:28.169946 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95"] Mar 18 13:11:28.173586 master-0 
kubenswrapper[7599]: W0318 13:11:28.173549 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9de7243_90c0_49c4_8059_34e0558fca40.slice/crio-c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e WatchSource:0}: Error finding container c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e: Status 404 returned error can't find the container with id c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e Mar 18 13:11:28.254179 master-0 kubenswrapper[7599]: I0318 13:11:28.253884 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:28.254179 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:28.254179 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:28.254179 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:28.254179 master-0 kubenswrapper[7599]: I0318 13:11:28.253972 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:28.521229 master-0 kubenswrapper[7599]: I0318 13:11:28.521086 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"46201f52909688d0b866665564333312afa7f308fe0c5dd71538e6e4a883b683"} Mar 18 13:11:28.521229 master-0 kubenswrapper[7599]: I0318 13:11:28.521157 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" 
event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e"} Mar 18 13:11:28.523055 master-0 kubenswrapper[7599]: I0318 13:11:28.522468 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerStarted","Data":"217f2ddac8460682f53f483f75566ba056797e6cb9215803ff6c892d4d2a8575"} Mar 18 13:11:28.592184 master-0 kubenswrapper[7599]: I0318 13:11:28.592121 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.595137 master-0 kubenswrapper[7599]: I0318 13:11:28.595066 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:28.698661 master-0 kubenswrapper[7599]: I0318 13:11:28.698606 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:28.698886 master-0 kubenswrapper[7599]: I0318 13:11:28.698767 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:28.699075 master-0 kubenswrapper[7599]: I0318 13:11:28.699043 7599 scope.go:117] "RemoveContainer" 
containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0" Mar 18 13:11:28.699510 master-0 kubenswrapper[7599]: E0318 13:11:28.699302 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator pod=catalog-operator-68f85b4d6c-t84s9_openshift-operator-lifecycle-manager(d2455453-5943-49ef-bfea-cba077197da0)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" Mar 18 13:11:28.794917 master-0 kubenswrapper[7599]: I0318 13:11:28.794792 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:11:29.193877 master-0 kubenswrapper[7599]: I0318 13:11:29.193825 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk"] Mar 18 13:11:29.254028 master-0 kubenswrapper[7599]: I0318 13:11:29.253968 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:29.254028 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:29.254028 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:29.254028 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:29.254672 master-0 kubenswrapper[7599]: I0318 13:11:29.254050 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:29.428038 master-0 kubenswrapper[7599]: W0318 13:11:29.427985 7599 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb65c095_ca20_432c_a069_ad6719fca9c8.slice/crio-381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71 WatchSource:0}: Error finding container 381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71: Status 404 returned error can't find the container with id 381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71 Mar 18 13:11:29.536080 master-0 kubenswrapper[7599]: I0318 13:11:29.536013 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71"} Mar 18 13:11:29.549760 master-0 kubenswrapper[7599]: I0318 13:11:29.536415 7599 scope.go:117] "RemoveContainer" containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0" Mar 18 13:11:29.549760 master-0 kubenswrapper[7599]: E0318 13:11:29.536661 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator pod=catalog-operator-68f85b4d6c-t84s9_openshift-operator-lifecycle-manager(d2455453-5943-49ef-bfea-cba077197da0)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" podUID="d2455453-5943-49ef-bfea-cba077197da0" Mar 18 13:11:30.253708 master-0 kubenswrapper[7599]: I0318 13:11:30.253667 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:30.253708 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:30.253708 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 
13:11:30.253708 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:30.253998 master-0 kubenswrapper[7599]: I0318 13:11:30.253722 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:30.542073 master-0 kubenswrapper[7599]: I0318 13:11:30.541934 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerStarted","Data":"7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7"} Mar 18 13:11:30.557815 master-0 kubenswrapper[7599]: I0318 13:11:30.557732 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" podStartSLOduration=2.81474881 podStartE2EDuration="4.557712366s" podCreationTimestamp="2026-03-18 13:11:26 +0000 UTC" firstStartedPulling="2026-03-18 13:11:27.731347963 +0000 UTC m=+262.692402195" lastFinishedPulling="2026-03-18 13:11:29.474311519 +0000 UTC m=+264.435365751" observedRunningTime="2026-03-18 13:11:30.554970046 +0000 UTC m=+265.516024288" watchObservedRunningTime="2026-03-18 13:11:30.557712366 +0000 UTC m=+265.518766608" Mar 18 13:11:31.253286 master-0 kubenswrapper[7599]: I0318 13:11:31.253221 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:31.253286 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:31.253286 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:31.253286 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:11:31.253604 master-0 kubenswrapper[7599]: I0318 13:11:31.253287 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:32.254878 master-0 kubenswrapper[7599]: I0318 13:11:32.254805 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:32.254878 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:32.254878 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:32.254878 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:32.255471 master-0 kubenswrapper[7599]: I0318 13:11:32.254919 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:33.253149 master-0 kubenswrapper[7599]: I0318 13:11:33.252968 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:33.253149 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:33.253149 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:33.253149 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:33.253149 master-0 kubenswrapper[7599]: I0318 13:11:33.253029 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:33.570964 master-0 kubenswrapper[7599]: I0318 13:11:33.570892 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"663b337d51e2873d5151b1f329e1358b3ddb8ded99570ad538a8ad35be083482"} Mar 18 13:11:33.570964 master-0 kubenswrapper[7599]: I0318 13:11:33.570945 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"34eb85215321835de9e05074243d042b395a27aa34e46a23e03dd0c3867bbe76"} Mar 18 13:11:33.592775 master-0 kubenswrapper[7599]: I0318 13:11:33.592691 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" podStartSLOduration=3.574806523 podStartE2EDuration="6.592672755s" podCreationTimestamp="2026-03-18 13:11:27 +0000 UTC" firstStartedPulling="2026-03-18 13:11:29.43030759 +0000 UTC m=+264.391361832" lastFinishedPulling="2026-03-18 13:11:32.448173822 +0000 UTC m=+267.409228064" observedRunningTime="2026-03-18 13:11:33.59250914 +0000 UTC m=+268.553563382" watchObservedRunningTime="2026-03-18 13:11:33.592672755 +0000 UTC m=+268.553726997" Mar 18 13:11:34.078301 master-0 kubenswrapper[7599]: I0318 13:11:34.078243 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"] Mar 18 13:11:34.079312 master-0 kubenswrapper[7599]: I0318 13:11:34.079241 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:34.081989 master-0 kubenswrapper[7599]: I0318 13:11:34.081966 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:11:34.082078 master-0 kubenswrapper[7599]: I0318 13:11:34.081976 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b" Mar 18 13:11:34.082205 master-0 kubenswrapper[7599]: I0318 13:11:34.082186 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:11:34.082480 master-0 kubenswrapper[7599]: I0318 13:11:34.082458 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:11:34.083493 master-0 kubenswrapper[7599]: I0318 13:11:34.083463 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:11:34.083723 master-0 kubenswrapper[7599]: I0318 13:11:34.083701 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:11:34.177109 master-0 kubenswrapper[7599]: I0318 13:11:34.177045 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:34.177109 master-0 kubenswrapper[7599]: I0318 13:11:34.177106 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbv4\" (UniqueName: 
\"kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:34.177369 master-0 kubenswrapper[7599]: I0318 13:11:34.177127 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:34.177369 master-0 kubenswrapper[7599]: I0318 13:11:34.177146 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:34.254108 master-0 kubenswrapper[7599]: I0318 13:11:34.254059 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:34.254108 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:34.254108 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:34.254108 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:34.254392 master-0 kubenswrapper[7599]: I0318 13:11:34.254120 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:34.277848 master-0 kubenswrapper[7599]: I0318 13:11:34.277809 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.278036 master-0 kubenswrapper[7599]: I0318 13:11:34.277863 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbv4\" (UniqueName: \"kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.278036 master-0 kubenswrapper[7599]: I0318 13:11:34.277906 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.278036 master-0 kubenswrapper[7599]: I0318 13:11:34.277931 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.279235 master-0 kubenswrapper[7599]: I0318 13:11:34.279186 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.281482 master-0 kubenswrapper[7599]: I0318 13:11:34.281104 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.282246 master-0 kubenswrapper[7599]: I0318 13:11:34.282187 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.298538 master-0 kubenswrapper[7599]: I0318 13:11:34.298459 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbv4\" (UniqueName: \"kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4\") pod \"machine-approver-6cb57bb5db-tc2gl\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.400191 master-0 kubenswrapper[7599]: I0318 13:11:34.399617 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"
Mar 18 13:11:34.499069 master-0 kubenswrapper[7599]: I0318 13:11:34.499022 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"]
Mar 18 13:11:34.500780 master-0 kubenswrapper[7599]: I0318 13:11:34.500752 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.506130 master-0 kubenswrapper[7599]: I0318 13:11:34.503280 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hr2xw"
Mar 18 13:11:34.506130 master-0 kubenswrapper[7599]: I0318 13:11:34.503363 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 13:11:34.506818 master-0 kubenswrapper[7599]: I0318 13:11:34.506567 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 13:11:34.508014 master-0 kubenswrapper[7599]: I0318 13:11:34.507901 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 13:11:34.512776 master-0 kubenswrapper[7599]: I0318 13:11:34.512728 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"]
Mar 18 13:11:34.583068 master-0 kubenswrapper[7599]: I0318 13:11:34.583012 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff8tm\" (UniqueName: \"kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.583639 master-0 kubenswrapper[7599]: I0318 13:11:34.583088 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.684963 master-0 kubenswrapper[7599]: I0318 13:11:34.684897 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8tm\" (UniqueName: \"kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.685186 master-0 kubenswrapper[7599]: I0318 13:11:34.684984 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.689500 master-0 kubenswrapper[7599]: I0318 13:11:34.689464 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.708288 master-0 kubenswrapper[7599]: I0318 13:11:34.708248 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8tm\" (UniqueName: \"kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:34.835475 master-0 kubenswrapper[7599]: I0318 13:11:34.835381 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"
Mar 18 13:11:35.214101 master-0 kubenswrapper[7599]: I0318 13:11:35.212158 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"]
Mar 18 13:11:35.214101 master-0 kubenswrapper[7599]: I0318 13:11:35.213270 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.216207 master-0 kubenswrapper[7599]: I0318 13:11:35.215814 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-lvs7l"
Mar 18 13:11:35.216207 master-0 kubenswrapper[7599]: I0318 13:11:35.215983 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 13:11:35.216207 master-0 kubenswrapper[7599]: I0318 13:11:35.216082 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 13:11:35.220864 master-0 kubenswrapper[7599]: I0318 13:11:35.220817 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-t4p42"]
Mar 18 13:11:35.231500 master-0 kubenswrapper[7599]: I0318 13:11:35.229667 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.233180 master-0 kubenswrapper[7599]: I0318 13:11:35.231738 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"]
Mar 18 13:11:35.233180 master-0 kubenswrapper[7599]: I0318 13:11:35.232716 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.233180 master-0 kubenswrapper[7599]: I0318 13:11:35.232794 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 13:11:35.233180 master-0 kubenswrapper[7599]: I0318 13:11:35.233089 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 13:11:35.235718 master-0 kubenswrapper[7599]: I0318 13:11:35.235653 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"]
Mar 18 13:11:35.241489 master-0 kubenswrapper[7599]: I0318 13:11:35.238459 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6fb5w"
Mar 18 13:11:35.241489 master-0 kubenswrapper[7599]: I0318 13:11:35.238693 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-sjstk"
Mar 18 13:11:35.241489 master-0 kubenswrapper[7599]: I0318 13:11:35.238802 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 13:11:35.241489 master-0 kubenswrapper[7599]: I0318 13:11:35.238931 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 13:11:35.241489 master-0 kubenswrapper[7599]: I0318 13:11:35.239035 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 13:11:35.254903 master-0 kubenswrapper[7599]: I0318 13:11:35.254623 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:35.254903 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:35.254903 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:35.254903 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:35.254903 master-0 kubenswrapper[7599]: I0318 13:11:35.254671 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:35.264581 master-0 kubenswrapper[7599]: I0318 13:11:35.264544 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"]
Mar 18 13:11:35.292446 master-0 kubenswrapper[7599]: I0318 13:11:35.292395 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk4ql\" (UniqueName: \"kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.292446 master-0 kubenswrapper[7599]: I0318 13:11:35.292445 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292462 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292479 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292504 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292526 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292541 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlh2\" (UniqueName: \"kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292558 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfkr\" (UniqueName: \"kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292579 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292605 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292620 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.292646 master-0 kubenswrapper[7599]: I0318 13:11:35.292638 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292654 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292676 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292698 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292718 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292736 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.292955 master-0 kubenswrapper[7599]: I0318 13:11:35.292754 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.393813 master-0 kubenswrapper[7599]: I0318 13:11:35.393760 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmlh2\" (UniqueName: \"kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393821 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfkr\" (UniqueName: \"kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393849 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393867 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393884 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393901 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393920 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393947 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393965 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.393982 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.394000 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394061 master-0 kubenswrapper[7599]: I0318 13:11:35.394044 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394073 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4ql\" (UniqueName: \"kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394092 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394109 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394129 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394155 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394184 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.394367 master-0 kubenswrapper[7599]: I0318 13:11:35.394298 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.395440 master-0 kubenswrapper[7599]: I0318 13:11:35.395077 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.396470 master-0 kubenswrapper[7599]: I0318 13:11:35.395707 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.396470 master-0 kubenswrapper[7599]: E0318 13:11:35.395772 7599 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found
Mar 18 13:11:35.396470 master-0 kubenswrapper[7599]: E0318 13:11:35.395804 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls podName:d325c523-8e6f-4665-9f54-334eaf301141 nodeName:}" failed. No retries permitted until 2026-03-18 13:11:35.895793375 +0000 UTC m=+270.856847617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-s4ql7" (UID: "d325c523-8e6f-4665-9f54-334eaf301141") : secret "openshift-state-metrics-tls" not found
Mar 18 13:11:35.396719 master-0 kubenswrapper[7599]: I0318 13:11:35.396670 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.396770 master-0 kubenswrapper[7599]: I0318 13:11:35.396679 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.396999 master-0 kubenswrapper[7599]: E0318 13:11:35.396984 7599 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Mar 18 13:11:35.397095 master-0 kubenswrapper[7599]: E0318 13:11:35.397082 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls podName:6a93ff56-362e-44fc-a54f-666a01559892 nodeName:}" failed. No retries permitted until 2026-03-18 13:11:35.897068818 +0000 UTC m=+270.858123060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-mxcng" (UID: "6a93ff56-362e-44fc-a54f-666a01559892") : secret "kube-state-metrics-tls" not found
Mar 18 13:11:35.397211 master-0 kubenswrapper[7599]: E0318 13:11:35.397198 7599 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Mar 18 13:11:35.397287 master-0 kubenswrapper[7599]: E0318 13:11:35.397278 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls podName:702076a9-b542-4768-9e9e-99b2cac0a66e nodeName:}" failed. No retries permitted until 2026-03-18 13:11:35.897269524 +0000 UTC m=+270.858323836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls") pod "node-exporter-t4p42" (UID: "702076a9-b542-4768-9e9e-99b2cac0a66e") : secret "node-exporter-tls" not found
Mar 18 13:11:35.397344 master-0 kubenswrapper[7599]: I0318 13:11:35.397285 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.397653 master-0 kubenswrapper[7599]: I0318 13:11:35.397625 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.398096 master-0 kubenswrapper[7599]: I0318 13:11:35.398077 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.400299 master-0 kubenswrapper[7599]: I0318 13:11:35.400264 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.402607 master-0 kubenswrapper[7599]: I0318 13:11:35.400599 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.404984 master-0 kubenswrapper[7599]: I0318 13:11:35.404949 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.408118 master-0 kubenswrapper[7599]: I0318 13:11:35.408071 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.417325 master-0 kubenswrapper[7599]: I0318 13:11:35.417265 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmlh2\" (UniqueName: \"kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:11:35.423482 master-0 kubenswrapper[7599]: I0318 13:11:35.421242 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfkr\" (UniqueName: \"kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:11:35.424595 master-0 kubenswrapper[7599]: I0318 13:11:35.424568 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4ql\" (UniqueName: \"kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:11:35.901981 master-0 kubenswrapper[7599]: I0318 13:11:35.901929 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18
13:11:35.902676 master-0 kubenswrapper[7599]: I0318 13:11:35.902016 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:11:35.902676 master-0 kubenswrapper[7599]: I0318 13:11:35.902270 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:11:35.906940 master-0 kubenswrapper[7599]: I0318 13:11:35.906915 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:11:35.907772 master-0 kubenswrapper[7599]: I0318 13:11:35.907742 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:11:35.908445 master-0 kubenswrapper[7599]: I0318 13:11:35.908373 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:11:35.910333 master-0 kubenswrapper[7599]: I0318 13:11:35.910314 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:11:35.930670 master-0 kubenswrapper[7599]: I0318 13:11:35.930592 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:11:36.152463 master-0 kubenswrapper[7599]: I0318 13:11:36.152319 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"] Mar 18 13:11:36.153617 master-0 kubenswrapper[7599]: I0318 13:11:36.153469 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.155936 master-0 kubenswrapper[7599]: I0318 13:11:36.155784 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vbmv6" Mar 18 13:11:36.157156 master-0 kubenswrapper[7599]: I0318 13:11:36.157111 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:11:36.158820 master-0 kubenswrapper[7599]: I0318 13:11:36.157480 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:11:36.170475 master-0 kubenswrapper[7599]: I0318 13:11:36.170398 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"] Mar 18 13:11:36.192006 master-0 kubenswrapper[7599]: I0318 13:11:36.191960 7599 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:11:36.205798 master-0 kubenswrapper[7599]: I0318 13:11:36.205754 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.205898 master-0 kubenswrapper[7599]: I0318 13:11:36.205865 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.205974 master-0 kubenswrapper[7599]: I0318 13:11:36.205947 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8qc\" (UniqueName: \"kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.268498 master-0 kubenswrapper[7599]: I0318 13:11:36.263989 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:36.268498 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:36.268498 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:11:36.268498 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:36.268498 master-0 kubenswrapper[7599]: I0318 13:11:36.264175 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:36.307269 master-0 kubenswrapper[7599]: I0318 13:11:36.307159 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.307269 master-0 kubenswrapper[7599]: I0318 13:11:36.307277 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8qc\" (UniqueName: \"kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.307604 master-0 kubenswrapper[7599]: I0318 13:11:36.307331 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.309363 master-0 kubenswrapper[7599]: I0318 13:11:36.308279 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.312161 master-0 kubenswrapper[7599]: I0318 13:11:36.311200 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.334454 master-0 kubenswrapper[7599]: I0318 13:11:36.334384 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8qc\" (UniqueName: \"kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.508076 master-0 kubenswrapper[7599]: I0318 13:11:36.508003 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:11:36.902187 master-0 kubenswrapper[7599]: I0318 13:11:36.902073 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bbqfl"] Mar 18 13:11:36.903179 master-0 kubenswrapper[7599]: I0318 13:11:36.903134 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:36.906644 master-0 kubenswrapper[7599]: I0318 13:11:36.906575 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 13:11:36.906852 master-0 kubenswrapper[7599]: I0318 13:11:36.906754 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 13:11:36.906852 master-0 kubenswrapper[7599]: I0318 13:11:36.906826 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 13:11:36.906949 master-0 kubenswrapper[7599]: I0318 13:11:36.906875 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-dprq6" Mar 18 13:11:36.906949 master-0 kubenswrapper[7599]: I0318 13:11:36.906784 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:11:36.907027 master-0 kubenswrapper[7599]: I0318 13:11:36.906988 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 13:11:36.911829 master-0 kubenswrapper[7599]: I0318 13:11:36.911779 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bbqfl"] Mar 18 13:11:36.916904 master-0 kubenswrapper[7599]: I0318 13:11:36.916850 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8ff\" (UniqueName: \"kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:36.916904 master-0 kubenswrapper[7599]: I0318 13:11:36.916904 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:36.917193 master-0 kubenswrapper[7599]: I0318 13:11:36.916936 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:36.917193 master-0 kubenswrapper[7599]: I0318 13:11:36.917044 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:36.917193 master-0 kubenswrapper[7599]: I0318 13:11:36.917093 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.025999 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8ff\" (UniqueName: \"kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: 
\"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.026064 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.026086 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.026105 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.026122 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.026848 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.027401 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.032440 master-0 kubenswrapper[7599]: I0318 13:11:37.027823 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.033454 master-0 kubenswrapper[7599]: I0318 13:11:37.033006 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.054519 master-0 kubenswrapper[7599]: I0318 13:11:37.053523 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8ff\" (UniqueName: \"kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.133958 master-0 kubenswrapper[7599]: I0318 13:11:37.133491 7599 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"] Mar 18 13:11:37.134694 master-0 kubenswrapper[7599]: I0318 13:11:37.134661 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.138191 master-0 kubenswrapper[7599]: I0318 13:11:37.137229 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:11:37.138191 master-0 kubenswrapper[7599]: I0318 13:11:37.137628 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-6lm6r" Mar 18 13:11:37.138587 master-0 kubenswrapper[7599]: I0318 13:11:37.138560 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:11:37.138702 master-0 kubenswrapper[7599]: I0318 13:11:37.138660 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:11:37.148821 master-0 kubenswrapper[7599]: I0318 13:11:37.148772 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"] Mar 18 13:11:37.193405 master-0 kubenswrapper[7599]: W0318 13:11:37.193323 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702076a9_b542_4768_9e9e_99b2cac0a66e.slice/crio-d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3 WatchSource:0}: Error finding container d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3: Status 404 returned error can't find the container with id d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3 Mar 18 13:11:37.228336 master-0 kubenswrapper[7599]: I0318 13:11:37.228298 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.228465 master-0 kubenswrapper[7599]: I0318 13:11:37.228439 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.228550 master-0 kubenswrapper[7599]: I0318 13:11:37.228482 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.228550 master-0 kubenswrapper[7599]: I0318 13:11:37.228526 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8j5\" (UniqueName: \"kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.234450 master-0 kubenswrapper[7599]: I0318 13:11:37.232271 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:11:37.304490 master-0 kubenswrapper[7599]: I0318 13:11:37.301017 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:37.304490 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:37.304490 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:37.304490 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:37.304490 master-0 kubenswrapper[7599]: I0318 13:11:37.301077 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.329434 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.329498 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.329526 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8j5\" (UniqueName: 
\"kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.329579 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.331447 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.332507 master-0 kubenswrapper[7599]: I0318 13:11:37.332134 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.371761 master-0 kubenswrapper[7599]: I0318 13:11:37.353039 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:11:37.380358 master-0 
kubenswrapper[7599]: I0318 13:11:37.380320 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8j5\" (UniqueName: \"kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"
Mar 18 13:11:37.485552 master-0 kubenswrapper[7599]: I0318 13:11:37.485492 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"
Mar 18 13:11:37.519508 master-0 kubenswrapper[7599]: I0318 13:11:37.517363 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"]
Mar 18 13:11:37.523439 master-0 kubenswrapper[7599]: I0318 13:11:37.521141 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.525759 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.526093 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.526209 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.526311 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.526436 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 13:11:37.529715 master-0 kubenswrapper[7599]: I0318 13:11:37.526520 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:11:37.599949 master-0 kubenswrapper[7599]: I0318 13:11:37.599899 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"ad240e2b8d0267297457b342613a05d440eb47d85b7ae176e7581cd53dc16f38"}
Mar 18 13:11:37.613769 master-0 kubenswrapper[7599]: I0318 13:11:37.613700 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerStarted","Data":"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0"}
Mar 18 13:11:37.613769 master-0 kubenswrapper[7599]: I0318 13:11:37.613755 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerStarted","Data":"fd203f439195728c79a6408f8c590a4c1f4265ecb56872e8885bd8b86d270077"}
Mar 18 13:11:37.616708 master-0 kubenswrapper[7599]: I0318 13:11:37.616677 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3"}
Mar 18 13:11:37.633413 master-0 kubenswrapper[7599]: I0318 13:11:37.633360 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.633413 master-0 kubenswrapper[7599]: I0318 13:11:37.633422 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.633666 master-0 kubenswrapper[7599]: I0318 13:11:37.633504 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75tx\" (UniqueName: \"kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.633666 master-0 kubenswrapper[7599]: I0318 13:11:37.633537 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.633666 master-0 kubenswrapper[7599]: I0318 13:11:37.633555 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.734807 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75tx\" (UniqueName: \"kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.735073 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.735122 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.735323 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.735471 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.735565 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.736123 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.736845 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.749685 master-0 kubenswrapper[7599]: I0318 13:11:37.743498 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.764076 master-0 kubenswrapper[7599]: I0318 13:11:37.762847 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75tx\" (UniqueName: \"kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wr89x\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.768233 master-0 kubenswrapper[7599]: I0318 13:11:37.766134 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" podStartSLOduration=1.82856897 podStartE2EDuration="10.766111467s" podCreationTimestamp="2026-03-18 13:11:27 +0000 UTC" firstStartedPulling="2026-03-18 13:11:28.29030075 +0000 UTC m=+263.251354992" lastFinishedPulling="2026-03-18 13:11:37.227843247 +0000 UTC m=+272.188897489" observedRunningTime="2026-03-18 13:11:37.726306126 +0000 UTC m=+272.687360368" watchObservedRunningTime="2026-03-18 13:11:37.766111467 +0000 UTC m=+272.727165719"
Mar 18 13:11:37.783628 master-0 kubenswrapper[7599]: I0318 13:11:37.783567 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"]
Mar 18 13:11:37.789600 master-0 kubenswrapper[7599]: I0318 13:11:37.789548 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"]
Mar 18 13:11:37.830729 master-0 kubenswrapper[7599]: I0318 13:11:37.829771 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"]
Mar 18 13:11:37.832337 master-0 kubenswrapper[7599]: W0318 13:11:37.832249 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68104a8c_3fac_4d4b_b975_bc2d045b3375.slice/crio-06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd WatchSource:0}: Error finding container 06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd: Status 404 returned error can't find the container with id 06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd
Mar 18 13:11:37.869314 master-0 kubenswrapper[7599]: I0318 13:11:37.867056 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p"]
Mar 18 13:11:37.872629 master-0 kubenswrapper[7599]: I0318 13:11:37.870248 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"]
Mar 18 13:11:37.912690 master-0 kubenswrapper[7599]: I0318 13:11:37.912641 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"
Mar 18 13:11:37.950647 master-0 kubenswrapper[7599]: W0318 13:11:37.950614 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b8c731_9ec8_4abb_8cc2_9821b2819e48.slice/crio-130c9b8b81d1dc1642b282baefa9a945c1fd58c682eee8d1812a7778c9b49286 WatchSource:0}: Error finding container 130c9b8b81d1dc1642b282baefa9a945c1fd58c682eee8d1812a7778c9b49286: Status 404 returned error can't find the container with id 130c9b8b81d1dc1642b282baefa9a945c1fd58c682eee8d1812a7778c9b49286
Mar 18 13:11:37.952946 master-0 kubenswrapper[7599]: I0318 13:11:37.952893 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-bbqfl"]
Mar 18 13:11:38.254083 master-0 kubenswrapper[7599]: I0318 13:11:38.253917 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:38.254083 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:38.254083 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:38.254083 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:38.254754 master-0 kubenswrapper[7599]: I0318 13:11:38.254000 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:38.627757 master-0 kubenswrapper[7599]: I0318 13:11:38.627672 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"cb59122d7a7b042121b64340b8ada26c1823fa00f9c980926b47cbaa0d20cc3f"}
Mar 18 13:11:38.630244 master-0 kubenswrapper[7599]: I0318 13:11:38.630204 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerStarted","Data":"5f5a7d7c0e9750e48ccca14b1c41ca2a57206319db458c1aefe78bdb62a1f334"}
Mar 18 13:11:38.636434 master-0 kubenswrapper[7599]: I0318 13:11:38.636218 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"b8e76ab6e36792c638116c40619921d7addf605312998f00e62d98e5a5614955"}
Mar 18 13:11:38.646567 master-0 kubenswrapper[7599]: I0318 13:11:38.646479 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"aae54c8930e87459876624e3a195385d1057c9142b7bea1bae8fab9500f4916d"}
Mar 18 13:11:38.646567 master-0 kubenswrapper[7599]: I0318 13:11:38.646527 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"ff229113000bcb5174eae222c2757e6f95658656feb5832275cddd0f205c413f"}
Mar 18 13:11:38.646567 master-0 kubenswrapper[7599]: I0318 13:11:38.646540 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"daaff2e16f5e705f64dc5a7b025fa31e1b94f1cba87483d97066f316342671c2"}
Mar 18 13:11:38.651859 master-0 kubenswrapper[7599]: I0318 13:11:38.650238 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"e09c13a4c855b0e00ad1329ef737699f109774957ff6b437737fd8c1e39daca5"}
Mar 18 13:11:38.660791 master-0 kubenswrapper[7599]: I0318 13:11:38.659015 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"c8a9c7ba3dfa56fce014dd866938b3ebae10a392ba44b6a44344dd4757310fda"}
Mar 18 13:11:38.660791 master-0 kubenswrapper[7599]: I0318 13:11:38.659090 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd"}
Mar 18 13:11:38.667087 master-0 kubenswrapper[7599]: I0318 13:11:38.664604 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"98f348c48c25a3ad3d98bffee1f3f7c9ece63bd1ce7d6bb87b45e89183bb6b2b"}
Mar 18 13:11:38.667087 master-0 kubenswrapper[7599]: I0318 13:11:38.664666 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"ed3daf11e343e1b2061522afa05ec8c54dad41a761078c089559715ea58a7e8b"}
Mar 18 13:11:38.671632 master-0 kubenswrapper[7599]: I0318 13:11:38.670868 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerStarted","Data":"130c9b8b81d1dc1642b282baefa9a945c1fd58c682eee8d1812a7778c9b49286"}
Mar 18 13:11:39.253680 master-0 kubenswrapper[7599]: I0318 13:11:39.253634 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:39.253680 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:39.253680 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:39.253680 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:39.254193 master-0 kubenswrapper[7599]: I0318 13:11:39.253694 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:39.679753 master-0 kubenswrapper[7599]: I0318 13:11:39.679688 7599 generic.go:334] "Generic (PLEG): container finished" podID="702076a9-b542-4768-9e9e-99b2cac0a66e" containerID="e09c13a4c855b0e00ad1329ef737699f109774957ff6b437737fd8c1e39daca5" exitCode=0
Mar 18 13:11:39.679753 master-0 kubenswrapper[7599]: I0318 13:11:39.679748 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerDied","Data":"e09c13a4c855b0e00ad1329ef737699f109774957ff6b437737fd8c1e39daca5"}
Mar 18 13:11:40.253603 master-0 kubenswrapper[7599]: I0318 13:11:40.253546 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:40.253603 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:40.253603 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:40.253603 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:40.254279 master-0 kubenswrapper[7599]: I0318 13:11:40.253618 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:40.835734 master-0 kubenswrapper[7599]: I0318 13:11:40.830544 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"]
Mar 18 13:11:40.835734 master-0 kubenswrapper[7599]: I0318 13:11:40.831198 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.836882 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.837162 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hnp25"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.837300 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.837485 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.837642 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5fm1li8uoic3j"
Mar 18 13:11:40.838481 master-0 kubenswrapper[7599]: I0318 13:11:40.837823 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 13:11:40.849908 master-0 kubenswrapper[7599]: I0318 13:11:40.849847 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"]
Mar 18 13:11:40.881667 master-0 kubenswrapper[7599]: I0318 13:11:40.881620 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881673 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881703 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881731 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881755 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881781 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.881879 master-0 kubenswrapper[7599]: I0318 13:11:40.881803 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983120 master-0 kubenswrapper[7599]: I0318 13:11:40.983045 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983120 master-0 kubenswrapper[7599]: I0318 13:11:40.983099 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983502 master-0 kubenswrapper[7599]: I0318 13:11:40.983466 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983562 master-0 kubenswrapper[7599]: I0318 13:11:40.983546 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983605 master-0 kubenswrapper[7599]: I0318 13:11:40.983553 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983657 master-0 kubenswrapper[7599]: I0318 13:11:40.983627 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983702 master-0 kubenswrapper[7599]: I0318 13:11:40.983688 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.983747 master-0 kubenswrapper[7599]: I0318 13:11:40.983736 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.984622 master-0 kubenswrapper[7599]: I0318 13:11:40.984571 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.984815 master-0 kubenswrapper[7599]: I0318 13:11:40.984786 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.988329 master-0 kubenswrapper[7599]: I0318 13:11:40.988283 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.989675 master-0 kubenswrapper[7599]: I0318 13:11:40.989620 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.991782 master-0 kubenswrapper[7599]: I0318 13:11:40.991738 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:40.999890 master-0 kubenswrapper[7599]: I0318 13:11:40.999617 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:41.208092 master-0 kubenswrapper[7599]: I0318 13:11:41.208042 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:11:41.253099 master-0 kubenswrapper[7599]: I0318 13:11:41.253019 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:41.253099 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:41.253099 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:41.253099 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:41.253099 master-0 kubenswrapper[7599]: I0318 13:11:41.253089 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:42.253946 master-0 kubenswrapper[7599]: I0318 13:11:42.253874 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:42.253946 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:42.253946 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:42.253946 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:42.254589 master-0 kubenswrapper[7599]: I0318 13:11:42.253962 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:43.254004 master-0 kubenswrapper[7599]: I0318 13:11:43.253876 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:43.254004 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:43.254004 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:43.254004 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:43.254004 master-0 kubenswrapper[7599]: I0318 13:11:43.253956 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:44.254457 master-0 kubenswrapper[7599]: I0318 13:11:44.254391 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:44.254457 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:44.254457 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:44.254457 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:44.254457 master-0 kubenswrapper[7599]: I0318 13:11:44.254468 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:44.371835 master-0 kubenswrapper[7599]: I0318 13:11:44.371791 7599 scope.go:117] "RemoveContainer" containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0"
Mar 18 13:11:45.017257 master-0 kubenswrapper[7599]: I0318 13:11:45.017220 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"]
Mar 18 13:11:45.253509 master-0 kubenswrapper[7599]: I0318 13:11:45.253459 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:45.253509 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:45.253509 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:45.253509 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:45.253795 master-0 kubenswrapper[7599]: I0318 13:11:45.253527 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:46.254005 master-0 kubenswrapper[7599]: I0318 13:11:46.253931 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:46.254005 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:46.254005 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:46.254005 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:46.254610 master-0 kubenswrapper[7599]: I0318 13:11:46.254006 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:46.781516 master-0 kubenswrapper[7599]: W0318 13:11:46.781430 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41cc6278_8f99_407c_ba5f_750a40e3058c.slice/crio-03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1 WatchSource:0}: Error finding container 03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1: Status 404 returned error can't find the container with id 03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1
Mar 18 13:11:47.256047 master-0 kubenswrapper[7599]: I0318 13:11:47.255982 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:11:47.256047 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:11:47.256047 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:11:47.256047 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:11:47.256047 master-0 kubenswrapper[7599]: I0318 13:11:47.256037 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:11:47.733238 master-0 kubenswrapper[7599]: I0318 13:11:47.733169 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerStarted","Data":"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd"}
Mar 18 13:11:47.743540 master-0 kubenswrapper[7599]: I0318 13:11:47.743497 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"e491672fbffb4614bce8c9d210033686066eca2a19da5ff73650fa1e0579c900"}
Mar 18
13:11:47.743540 master-0 kubenswrapper[7599]: I0318 13:11:47.743543 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"2e03f5085662e6e504d1377448ea945910b21e845ebe1440b4adca9307187581"} Mar 18 13:11:47.745828 master-0 kubenswrapper[7599]: I0318 13:11:47.745793 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/1.log" Mar 18 13:11:47.746034 master-0 kubenswrapper[7599]: I0318 13:11:47.745992 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"d0f4426f9820f0b9e0e6d6fc6e2530dc463048378180953ea79d72f72fda3c7c"} Mar 18 13:11:47.746436 master-0 kubenswrapper[7599]: I0318 13:11:47.746285 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:47.747989 master-0 kubenswrapper[7599]: I0318 13:11:47.747787 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" podStartSLOduration=4.535979935 podStartE2EDuration="13.747777041s" podCreationTimestamp="2026-03-18 13:11:34 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.565024391 +0000 UTC m=+272.526078633" lastFinishedPulling="2026-03-18 13:11:46.776821467 +0000 UTC m=+281.737875739" observedRunningTime="2026-03-18 13:11:47.747276523 +0000 UTC m=+282.708330765" watchObservedRunningTime="2026-03-18 13:11:47.747777041 +0000 UTC m=+282.708831283" Mar 18 13:11:47.752426 master-0 kubenswrapper[7599]: I0318 13:11:47.752371 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0"} Mar 18 13:11:47.757671 master-0 kubenswrapper[7599]: I0318 13:11:47.757625 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"bf094747ec2d2ce13923f1981e2b60301e7fc87989743a2574d721a3108a0f23"} Mar 18 13:11:47.757734 master-0 kubenswrapper[7599]: I0318 13:11:47.757675 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"e6c4257b0b67452b20cd7a6f86548bd2672d0f4565af7d50ef244e38ac13bf8e"} Mar 18 13:11:47.757734 master-0 kubenswrapper[7599]: I0318 13:11:47.757690 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"a669294396e2c35e0be9a0842b4ba90e0c2258e89ac9948c8865f45b4432e16b"} Mar 18 13:11:47.762246 master-0 kubenswrapper[7599]: I0318 13:11:47.762211 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:11:47.765441 master-0 kubenswrapper[7599]: I0318 13:11:47.765188 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"623b1f887d07a207253d786ce6f347b115eff72fb9da12be783a840d209812fb"} Mar 18 13:11:47.765534 master-0 kubenswrapper[7599]: I0318 13:11:47.765476 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"c57e6b4204d657669c9164f93a42b5760026a8d1d5180433a4216ca3f552edf0"} Mar 18 13:11:47.786424 master-0 kubenswrapper[7599]: I0318 13:11:47.786311 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"ed1f8fad77b3049231437f6fc06b7cd861bd826f2609faf9830e6b26f51e0a3b"} Mar 18 13:11:47.788802 master-0 kubenswrapper[7599]: I0318 13:11:47.788777 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerStarted","Data":"03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1"} Mar 18 13:11:47.791284 master-0 kubenswrapper[7599]: I0318 13:11:47.791252 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263"} Mar 18 13:11:47.792917 master-0 kubenswrapper[7599]: I0318 13:11:47.792891 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerStarted","Data":"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6"} Mar 18 13:11:47.792992 master-0 kubenswrapper[7599]: I0318 13:11:47.792920 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" 
event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerStarted","Data":"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2"} Mar 18 13:11:47.794720 master-0 kubenswrapper[7599]: I0318 13:11:47.794159 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerStarted","Data":"d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7"} Mar 18 13:11:47.802592 master-0 kubenswrapper[7599]: I0318 13:11:47.802526 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-t4p42" podStartSLOduration=11.610195514 podStartE2EDuration="12.80249314s" podCreationTimestamp="2026-03-18 13:11:35 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.196278079 +0000 UTC m=+272.157332321" lastFinishedPulling="2026-03-18 13:11:38.388575695 +0000 UTC m=+273.349629947" observedRunningTime="2026-03-18 13:11:47.798004763 +0000 UTC m=+282.759059005" watchObservedRunningTime="2026-03-18 13:11:47.80249314 +0000 UTC m=+282.763547382" Mar 18 13:11:47.823335 master-0 kubenswrapper[7599]: I0318 13:11:47.822810 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" podStartSLOduration=6.034350757 podStartE2EDuration="12.822795818s" podCreationTimestamp="2026-03-18 13:11:35 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.788659928 +0000 UTC m=+272.749714180" lastFinishedPulling="2026-03-18 13:11:44.577104999 +0000 UTC m=+279.538159241" observedRunningTime="2026-03-18 13:11:47.820299307 +0000 UTC m=+282.781353569" watchObservedRunningTime="2026-03-18 13:11:47.822795818 +0000 UTC m=+282.783850060" Mar 18 13:11:47.845836 master-0 kubenswrapper[7599]: I0318 13:11:47.845692 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" podStartSLOduration=5.023061818 podStartE2EDuration="13.845673441s" podCreationTimestamp="2026-03-18 13:11:34 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.954357499 +0000 UTC m=+272.915411741" lastFinishedPulling="2026-03-18 13:11:46.776969122 +0000 UTC m=+281.738023364" observedRunningTime="2026-03-18 13:11:47.845080181 +0000 UTC m=+282.806134423" watchObservedRunningTime="2026-03-18 13:11:47.845673441 +0000 UTC m=+282.806727683" Mar 18 13:11:47.876941 master-0 kubenswrapper[7599]: I0318 13:11:47.876542 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" podStartSLOduration=6.396884654 podStartE2EDuration="12.876525546s" podCreationTimestamp="2026-03-18 13:11:35 +0000 UTC" firstStartedPulling="2026-03-18 13:11:38.09634598 +0000 UTC m=+273.057400222" lastFinishedPulling="2026-03-18 13:11:44.575986872 +0000 UTC m=+279.537041114" observedRunningTime="2026-03-18 13:11:47.871675287 +0000 UTC m=+282.832729529" watchObservedRunningTime="2026-03-18 13:11:47.876525546 +0000 UTC m=+282.837579788" Mar 18 13:11:47.941629 master-0 kubenswrapper[7599]: I0318 13:11:47.941552 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" podStartSLOduration=3.11186402 podStartE2EDuration="11.941462053s" podCreationTimestamp="2026-03-18 13:11:36 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.965513336 +0000 UTC m=+272.926567578" lastFinishedPulling="2026-03-18 13:11:46.795111369 +0000 UTC m=+281.756165611" observedRunningTime="2026-03-18 13:11:47.914425503 +0000 UTC m=+282.875479745" watchObservedRunningTime="2026-03-18 13:11:47.941462053 +0000 UTC m=+282.902516295" Mar 18 13:11:47.941823 master-0 kubenswrapper[7599]: I0318 13:11:47.941678 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" podStartSLOduration=2.038567508 podStartE2EDuration="10.941671069s" podCreationTimestamp="2026-03-18 13:11:37 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.988713399 +0000 UTC m=+272.949767641" lastFinishedPulling="2026-03-18 13:11:46.89181695 +0000 UTC m=+281.852871202" observedRunningTime="2026-03-18 13:11:47.936806299 +0000 UTC m=+282.897860551" watchObservedRunningTime="2026-03-18 13:11:47.941671069 +0000 UTC m=+282.902725331" Mar 18 13:11:47.979974 master-0 kubenswrapper[7599]: I0318 13:11:47.979872 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" podStartSLOduration=3.209833414 podStartE2EDuration="11.979844125s" podCreationTimestamp="2026-03-18 13:11:36 +0000 UTC" firstStartedPulling="2026-03-18 13:11:38.007065534 +0000 UTC m=+272.968119776" lastFinishedPulling="2026-03-18 13:11:46.777076215 +0000 UTC m=+281.738130487" observedRunningTime="2026-03-18 13:11:47.958209954 +0000 UTC m=+282.919264206" watchObservedRunningTime="2026-03-18 13:11:47.979844125 +0000 UTC m=+282.940898367" Mar 18 13:11:48.175617 master-0 kubenswrapper[7599]: I0318 13:11:48.175568 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8wqfk"] Mar 18 13:11:48.179427 master-0 kubenswrapper[7599]: I0318 13:11:48.176578 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.179427 master-0 kubenswrapper[7599]: W0318 13:11:48.178224 7599 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx": failed to list *v1.Secret: secrets "certified-operators-dockercfg-kn6rx" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'master-0' and this object Mar 18 13:11:48.179427 master-0 kubenswrapper[7599]: E0318 13:11:48.178299 7599 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-kn6rx\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-kn6rx\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: I0318 13:11:48.252542 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wqfk"] Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: I0318 13:11:48.252897 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:48.254181 master-0 kubenswrapper[7599]: I0318 13:11:48.252981 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:48.349182 master-0 kubenswrapper[7599]: I0318 13:11:48.349092 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrgxg\" (UniqueName: \"kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.349899 master-0 kubenswrapper[7599]: I0318 13:11:48.349212 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.349899 master-0 kubenswrapper[7599]: I0318 13:11:48.349243 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.385434 master-0 kubenswrapper[7599]: I0318 13:11:48.383973 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tqw5h"] Mar 18 13:11:48.385434 master-0 kubenswrapper[7599]: I0318 13:11:48.385297 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.391056 master-0 kubenswrapper[7599]: I0318 13:11:48.387130 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-q8tt6" Mar 18 13:11:48.401441 master-0 kubenswrapper[7599]: I0318 13:11:48.396370 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqw5h"] Mar 18 13:11:48.451515 master-0 kubenswrapper[7599]: I0318 13:11:48.451281 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrgxg\" (UniqueName: \"kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.451515 master-0 kubenswrapper[7599]: I0318 13:11:48.451464 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.451515 master-0 kubenswrapper[7599]: I0318 13:11:48.451493 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.454870 master-0 kubenswrapper[7599]: I0318 13:11:48.452139 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities\") pod \"certified-operators-8wqfk\" (UID: 
\"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.454870 master-0 kubenswrapper[7599]: I0318 13:11:48.453972 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.468583 master-0 kubenswrapper[7599]: I0318 13:11:48.468538 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrgxg\" (UniqueName: \"kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:48.553522 master-0 kubenswrapper[7599]: I0318 13:11:48.552450 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.553522 master-0 kubenswrapper[7599]: I0318 13:11:48.552540 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfsk\" (UniqueName: \"kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.553522 master-0 kubenswrapper[7599]: I0318 13:11:48.552559 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.653949 master-0 kubenswrapper[7599]: I0318 13:11:48.653884 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfsk\" (UniqueName: \"kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.654281 master-0 kubenswrapper[7599]: I0318 13:11:48.654259 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.654694 master-0 kubenswrapper[7599]: I0318 13:11:48.654670 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.654758 master-0 kubenswrapper[7599]: I0318 13:11:48.654740 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.655048 master-0 kubenswrapper[7599]: I0318 13:11:48.655021 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.671505 master-0 kubenswrapper[7599]: I0318 13:11:48.670874 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfsk\" (UniqueName: \"kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.705812 master-0 kubenswrapper[7599]: I0318 13:11:48.705701 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:11:48.802988 master-0 kubenswrapper[7599]: I0318 13:11:48.802951 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerStarted","Data":"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d"} Mar 18 13:11:48.821901 master-0 kubenswrapper[7599]: I0318 13:11:48.821830 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" podStartSLOduration=2.947785191 podStartE2EDuration="11.821813485s" podCreationTimestamp="2026-03-18 13:11:37 +0000 UTC" firstStartedPulling="2026-03-18 13:11:37.953986847 +0000 UTC m=+272.915041089" lastFinishedPulling="2026-03-18 13:11:46.828015141 +0000 UTC m=+281.789069383" observedRunningTime="2026-03-18 13:11:48.817855445 +0000 UTC m=+283.778909687" watchObservedRunningTime="2026-03-18 13:11:48.821813485 +0000 UTC m=+283.782867727" Mar 18 13:11:49.254929 master-0 kubenswrapper[7599]: I0318 
13:11:49.254713 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:49.254929 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:49.254929 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:49.254929 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:49.255509 master-0 kubenswrapper[7599]: I0318 13:11:49.254992 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:49.490522 master-0 kubenswrapper[7599]: I0318 13:11:49.490444 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqw5h"] Mar 18 13:11:49.498049 master-0 kubenswrapper[7599]: W0318 13:11:49.498006 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a25632e_32d0_43d2_9be7_f515d29a1720.slice/crio-fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3 WatchSource:0}: Error finding container fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3: Status 404 returned error can't find the container with id fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3 Mar 18 13:11:49.557948 master-0 kubenswrapper[7599]: I0318 13:11:49.557903 7599 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-8wqfk" secret="" err="failed to sync secret cache: timed out waiting for the condition" Mar 18 13:11:49.558043 master-0 kubenswrapper[7599]: I0318 13:11:49.557992 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:11:49.626848 master-0 kubenswrapper[7599]: I0318 13:11:49.626776 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx" Mar 18 13:11:49.784931 master-0 kubenswrapper[7599]: I0318 13:11:49.784791 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlrz"] Mar 18 13:11:49.786670 master-0 kubenswrapper[7599]: I0318 13:11:49.786549 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:49.789451 master-0 kubenswrapper[7599]: I0318 13:11:49.788891 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-dz6jc" Mar 18 13:11:49.799252 master-0 kubenswrapper[7599]: I0318 13:11:49.799181 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlrz"] Mar 18 13:11:50.141781 master-0 kubenswrapper[7599]: I0318 13:11:50.141726 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.141929 master-0 kubenswrapper[7599]: I0318 13:11:50.141789 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6h6\" (UniqueName: \"kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.141929 master-0 kubenswrapper[7599]: I0318 13:11:50.141820 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.163918 master-0 kubenswrapper[7599]: I0318 13:11:50.163039 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerStarted","Data":"de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1"} Mar 18 13:11:50.165783 master-0 kubenswrapper[7599]: I0318 13:11:50.165504 7599 generic.go:334] "Generic (PLEG): container finished" podID="2a25632e-32d0-43d2-9be7-f515d29a1720" containerID="c180f7f3ef28dbeeb20612afdf694c75b1483a1a6158630039543cf7971e63f5" exitCode=0 Mar 18 13:11:50.166111 master-0 kubenswrapper[7599]: I0318 13:11:50.166013 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerDied","Data":"c180f7f3ef28dbeeb20612afdf694c75b1483a1a6158630039543cf7971e63f5"} Mar 18 13:11:50.166111 master-0 kubenswrapper[7599]: I0318 13:11:50.166049 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerStarted","Data":"fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3"} Mar 18 13:11:50.199703 master-0 kubenswrapper[7599]: I0318 13:11:50.199492 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" podStartSLOduration=7.916792251 podStartE2EDuration="10.199471868s" podCreationTimestamp="2026-03-18 13:11:40 +0000 UTC" firstStartedPulling="2026-03-18 
13:11:46.786429823 +0000 UTC m=+281.747484065" lastFinishedPulling="2026-03-18 13:11:49.06910944 +0000 UTC m=+284.030163682" observedRunningTime="2026-03-18 13:11:50.195077653 +0000 UTC m=+285.156131895" watchObservedRunningTime="2026-03-18 13:11:50.199471868 +0000 UTC m=+285.160526110" Mar 18 13:11:50.203495 master-0 kubenswrapper[7599]: I0318 13:11:50.200884 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wqfk"] Mar 18 13:11:50.211230 master-0 kubenswrapper[7599]: W0318 13:11:50.211177 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2316774_4ebc_4fa9_be07_eb1f16f614dd.slice/crio-f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db WatchSource:0}: Error finding container f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db: Status 404 returned error can't find the container with id f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db Mar 18 13:11:50.242184 master-0 kubenswrapper[7599]: I0318 13:11:50.242147 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.242257 master-0 kubenswrapper[7599]: I0318 13:11:50.242198 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6h6\" (UniqueName: \"kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.242257 master-0 kubenswrapper[7599]: I0318 13:11:50.242249 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.243028 master-0 kubenswrapper[7599]: I0318 13:11:50.243001 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.244474 master-0 kubenswrapper[7599]: I0318 13:11:50.243646 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.254813 master-0 kubenswrapper[7599]: I0318 13:11:50.254703 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:50.254813 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:50.254813 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:50.254813 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:50.254813 master-0 kubenswrapper[7599]: I0318 13:11:50.254759 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:50.273691 master-0 kubenswrapper[7599]: I0318 
13:11:50.273635 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6h6\" (UniqueName: \"kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.471646 master-0 kubenswrapper[7599]: I0318 13:11:50.471542 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:11:50.879720 master-0 kubenswrapper[7599]: I0318 13:11:50.879669 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bxlrz"] Mar 18 13:11:50.888455 master-0 kubenswrapper[7599]: I0318 13:11:50.884973 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"] Mar 18 13:11:50.888455 master-0 kubenswrapper[7599]: I0318 13:11:50.885284 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="kube-rbac-proxy" containerID="cri-o://0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" gracePeriod=30 Mar 18 13:11:50.888455 master-0 kubenswrapper[7599]: I0318 13:11:50.885522 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="machine-approver-controller" containerID="cri-o://25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" gracePeriod=30 Mar 18 13:11:50.982146 master-0 kubenswrapper[7599]: I0318 13:11:50.982057 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-89st2"] Mar 18 13:11:50.983382 master-0 kubenswrapper[7599]: 
I0318 13:11:50.983355 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:50.985182 master-0 kubenswrapper[7599]: I0318 13:11:50.985016 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-sn888" Mar 18 13:11:50.994917 master-0 kubenswrapper[7599]: I0318 13:11:50.994883 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89st2"] Mar 18 13:11:51.103370 master-0 kubenswrapper[7599]: I0318 13:11:51.103322 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:51.155435 master-0 kubenswrapper[7599]: I0318 13:11:51.155363 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.155578 master-0 kubenswrapper[7599]: I0318 13:11:51.155468 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbwfq\" (UniqueName: \"kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.155578 master-0 kubenswrapper[7599]: I0318 13:11:51.155523 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " 
pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.189035 master-0 kubenswrapper[7599]: I0318 13:11:51.188906 7599 generic.go:334] "Generic (PLEG): container finished" podID="2624f748-2132-40c6-aa2e-52df50ba8911" containerID="25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" exitCode=0 Mar 18 13:11:51.189160 master-0 kubenswrapper[7599]: I0318 13:11:51.189055 7599 generic.go:334] "Generic (PLEG): container finished" podID="2624f748-2132-40c6-aa2e-52df50ba8911" containerID="0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" exitCode=0 Mar 18 13:11:51.189160 master-0 kubenswrapper[7599]: I0318 13:11:51.189002 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" Mar 18 13:11:51.189160 master-0 kubenswrapper[7599]: I0318 13:11:51.188988 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerDied","Data":"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd"} Mar 18 13:11:51.189160 master-0 kubenswrapper[7599]: I0318 13:11:51.189150 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerDied","Data":"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0"} Mar 18 13:11:51.189393 master-0 kubenswrapper[7599]: I0318 13:11:51.189167 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl" event={"ID":"2624f748-2132-40c6-aa2e-52df50ba8911","Type":"ContainerDied","Data":"fd203f439195728c79a6408f8c590a4c1f4265ecb56872e8885bd8b86d270077"} Mar 18 13:11:51.189393 master-0 kubenswrapper[7599]: I0318 13:11:51.189217 7599 scope.go:117] "RemoveContainer" 
containerID="25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" Mar 18 13:11:51.195931 master-0 kubenswrapper[7599]: I0318 13:11:51.195749 7599 generic.go:334] "Generic (PLEG): container finished" podID="d2316774-4ebc-4fa9-be07-eb1f16f614dd" containerID="4c8a9dfdf52860c843b25f4e4b2d64bea7e0f6631bfdbe29d75a91918d723d48" exitCode=0 Mar 18 13:11:51.195931 master-0 kubenswrapper[7599]: I0318 13:11:51.195842 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerDied","Data":"4c8a9dfdf52860c843b25f4e4b2d64bea7e0f6631bfdbe29d75a91918d723d48"} Mar 18 13:11:51.195931 master-0 kubenswrapper[7599]: I0318 13:11:51.195874 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerStarted","Data":"f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db"} Mar 18 13:11:51.200659 master-0 kubenswrapper[7599]: I0318 13:11:51.200607 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerStarted","Data":"430ac96fd015a6eea0a650279b116d5a8e02003f3361085b042396c185be38af"} Mar 18 13:11:51.200659 master-0 kubenswrapper[7599]: I0318 13:11:51.200656 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerStarted","Data":"9f1629a9c890b158ad74d9b6c35c2de2573e526e00eff6015bd3861ec48b5231"} Mar 18 13:11:51.235059 master-0 kubenswrapper[7599]: I0318 13:11:51.235026 7599 scope.go:117] "RemoveContainer" containerID="0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" Mar 18 13:11:51.253252 master-0 kubenswrapper[7599]: I0318 13:11:51.253206 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:51.253252 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:51.253252 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:51.253252 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:51.253650 master-0 kubenswrapper[7599]: I0318 13:11:51.253256 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:51.255909 master-0 kubenswrapper[7599]: I0318 13:11:51.255881 7599 scope.go:117] "RemoveContainer" containerID="25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" Mar 18 13:11:51.256124 master-0 kubenswrapper[7599]: I0318 13:11:51.256098 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbv4\" (UniqueName: \"kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4\") pod \"2624f748-2132-40c6-aa2e-52df50ba8911\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " Mar 18 13:11:51.256199 master-0 kubenswrapper[7599]: I0318 13:11:51.256177 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls\") pod \"2624f748-2132-40c6-aa2e-52df50ba8911\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " Mar 18 13:11:51.256287 master-0 kubenswrapper[7599]: I0318 13:11:51.256267 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config\") pod \"2624f748-2132-40c6-aa2e-52df50ba8911\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " Mar 18 13:11:51.256359 master-0 kubenswrapper[7599]: I0318 13:11:51.256312 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config\") pod \"2624f748-2132-40c6-aa2e-52df50ba8911\" (UID: \"2624f748-2132-40c6-aa2e-52df50ba8911\") " Mar 18 13:11:51.256394 master-0 kubenswrapper[7599]: E0318 13:11:51.256366 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd\": container with ID starting with 25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd not found: ID does not exist" containerID="25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" Mar 18 13:11:51.256525 master-0 kubenswrapper[7599]: I0318 13:11:51.256406 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd"} err="failed to get container status \"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd\": rpc error: code = NotFound desc = could not find container \"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd\": container with ID starting with 25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd not found: ID does not exist" Mar 18 13:11:51.256525 master-0 kubenswrapper[7599]: I0318 13:11:51.256436 7599 scope.go:117] "RemoveContainer" containerID="0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" Mar 18 13:11:51.256752 master-0 kubenswrapper[7599]: E0318 13:11:51.256731 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0\": container with ID starting with 0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0 not found: ID does not exist" containerID="0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" Mar 18 13:11:51.256793 master-0 kubenswrapper[7599]: I0318 13:11:51.256756 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0"} err="failed to get container status \"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0\": rpc error: code = NotFound desc = could not find container \"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0\": container with ID starting with 0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0 not found: ID does not exist" Mar 18 13:11:51.256793 master-0 kubenswrapper[7599]: I0318 13:11:51.256765 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "2624f748-2132-40c6-aa2e-52df50ba8911" (UID: "2624f748-2132-40c6-aa2e-52df50ba8911"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:11:51.256793 master-0 kubenswrapper[7599]: I0318 13:11:51.256771 7599 scope.go:117] "RemoveContainer" containerID="25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd" Mar 18 13:11:51.256945 master-0 kubenswrapper[7599]: I0318 13:11:51.256922 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.256998 master-0 kubenswrapper[7599]: I0318 13:11:51.256974 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbwfq\" (UniqueName: \"kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.257047 master-0 kubenswrapper[7599]: I0318 13:11:51.257028 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.257119 master-0 kubenswrapper[7599]: I0318 13:11:51.257084 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd"} err="failed to get container status \"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd\": rpc error: code = NotFound desc = could not find container \"25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd\": container with ID starting with 
25c5b004cbd15a0e41bc1ec41a3336d780436ce4b506da49b6e40d0d6e02e8cd not found: ID does not exist" Mar 18 13:11:51.257162 master-0 kubenswrapper[7599]: I0318 13:11:51.257123 7599 scope.go:117] "RemoveContainer" containerID="0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0" Mar 18 13:11:51.257214 master-0 kubenswrapper[7599]: I0318 13:11:51.257121 7599 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:51.257305 master-0 kubenswrapper[7599]: I0318 13:11:51.257138 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config" (OuterVolumeSpecName: "config") pod "2624f748-2132-40c6-aa2e-52df50ba8911" (UID: "2624f748-2132-40c6-aa2e-52df50ba8911"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:11:51.257550 master-0 kubenswrapper[7599]: I0318 13:11:51.257531 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.257608 master-0 kubenswrapper[7599]: I0318 13:11:51.257555 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.260732 master-0 kubenswrapper[7599]: I0318 13:11:51.258759 7599 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0"} err="failed to get container status \"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0\": rpc error: code = NotFound desc = could not find container \"0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0\": container with ID starting with 0081f93aac49349ae24ac36cdf03fb5cb79f65e865fbe1e81aca69accde14cf0 not found: ID does not exist" Mar 18 13:11:51.260732 master-0 kubenswrapper[7599]: I0318 13:11:51.259373 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "2624f748-2132-40c6-aa2e-52df50ba8911" (UID: "2624f748-2132-40c6-aa2e-52df50ba8911"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:11:51.260732 master-0 kubenswrapper[7599]: I0318 13:11:51.259823 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4" (OuterVolumeSpecName: "kube-api-access-fbbv4") pod "2624f748-2132-40c6-aa2e-52df50ba8911" (UID: "2624f748-2132-40c6-aa2e-52df50ba8911"). InnerVolumeSpecName "kube-api-access-fbbv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:11:51.272236 master-0 kubenswrapper[7599]: I0318 13:11:51.272135 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbwfq\" (UniqueName: \"kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.360283 master-0 kubenswrapper[7599]: I0318 13:11:51.360221 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2624f748-2132-40c6-aa2e-52df50ba8911-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:51.360552 master-0 kubenswrapper[7599]: I0318 13:11:51.360305 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbv4\" (UniqueName: \"kubernetes.io/projected/2624f748-2132-40c6-aa2e-52df50ba8911-kube-api-access-fbbv4\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:51.360552 master-0 kubenswrapper[7599]: I0318 13:11:51.360323 7599 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2624f748-2132-40c6-aa2e-52df50ba8911-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:51.405624 master-0 kubenswrapper[7599]: I0318 13:11:51.404092 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:11:51.513722 master-0 kubenswrapper[7599]: I0318 13:11:51.513657 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"] Mar 18 13:11:51.517963 master-0 kubenswrapper[7599]: I0318 13:11:51.517913 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-tc2gl"] Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: I0318 13:11:51.543123 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql"] Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: E0318 13:11:51.543355 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="kube-rbac-proxy" Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: I0318 13:11:51.543367 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="kube-rbac-proxy" Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: E0318 13:11:51.543393 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="machine-approver-controller" Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: I0318 13:11:51.543400 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="machine-approver-controller" Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: I0318 13:11:51.543511 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="machine-approver-controller" Mar 18 13:11:51.543515 master-0 kubenswrapper[7599]: I0318 13:11:51.543526 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" containerName="kube-rbac-proxy" 
Mar 18 13:11:51.544083 master-0 kubenswrapper[7599]: I0318 13:11:51.544061 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.546497 master-0 kubenswrapper[7599]: I0318 13:11:51.545844 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:11:51.546704 master-0 kubenswrapper[7599]: I0318 13:11:51.546508 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b" Mar 18 13:11:51.546704 master-0 kubenswrapper[7599]: I0318 13:11:51.546653 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:11:51.546704 master-0 kubenswrapper[7599]: I0318 13:11:51.546680 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:11:51.546990 master-0 kubenswrapper[7599]: I0318 13:11:51.546960 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:11:51.550982 master-0 kubenswrapper[7599]: I0318 13:11:51.550659 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:11:51.664263 master-0 kubenswrapper[7599]: I0318 13:11:51.664216 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.664511 master-0 kubenswrapper[7599]: I0318 13:11:51.664283 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfbx8\" (UniqueName: \"kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.664511 master-0 kubenswrapper[7599]: I0318 13:11:51.664350 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.664511 master-0 kubenswrapper[7599]: I0318 13:11:51.664391 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.773441 master-0 kubenswrapper[7599]: I0318 13:11:51.773382 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.773888 master-0 kubenswrapper[7599]: I0318 13:11:51.773519 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfbx8\" (UniqueName: 
\"kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.773888 master-0 kubenswrapper[7599]: I0318 13:11:51.773570 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.773888 master-0 kubenswrapper[7599]: I0318 13:11:51.773630 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.774451 master-0 kubenswrapper[7599]: I0318 13:11:51.773989 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.776037 master-0 kubenswrapper[7599]: I0318 13:11:51.775885 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 
13:11:51.779131 master-0 kubenswrapper[7599]: I0318 13:11:51.779039 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.791234 master-0 kubenswrapper[7599]: I0318 13:11:51.791188 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfbx8\" (UniqueName: \"kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:51.811980 master-0 kubenswrapper[7599]: I0318 13:11:51.811746 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-89st2"] Mar 18 13:11:51.816390 master-0 kubenswrapper[7599]: W0318 13:11:51.816358 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9548e397_0db4_41c8_9cc8_b575060e9c66.slice/crio-5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52 WatchSource:0}: Error finding container 5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52: Status 404 returned error can't find the container with id 5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52 Mar 18 13:11:51.869091 master-0 kubenswrapper[7599]: I0318 13:11:51.869050 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:11:52.208894 master-0 kubenswrapper[7599]: I0318 13:11:52.208774 7599 generic.go:334] "Generic (PLEG): container finished" podID="e390416b-4fa1-41d5-bc74-9e779b252350" containerID="430ac96fd015a6eea0a650279b116d5a8e02003f3361085b042396c185be38af" exitCode=0 Mar 18 13:11:52.208894 master-0 kubenswrapper[7599]: I0318 13:11:52.208859 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerDied","Data":"430ac96fd015a6eea0a650279b116d5a8e02003f3361085b042396c185be38af"} Mar 18 13:11:52.212969 master-0 kubenswrapper[7599]: I0318 13:11:52.212904 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"b3c1a7233994d6fe76298cbc19c305628db4f9a91233624d87cce643360815bc"} Mar 18 13:11:52.213082 master-0 kubenswrapper[7599]: I0318 13:11:52.213049 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"354c2a6b66c065fe648ce36ee5e4c7bbfed1c688af2120800fda750d61548f3b"} Mar 18 13:11:52.215277 master-0 kubenswrapper[7599]: I0318 13:11:52.215249 7599 generic.go:334] "Generic (PLEG): container finished" podID="9548e397-0db4-41c8-9cc8-b575060e9c66" containerID="aacaa4f75f3c9d2bdb4d347974e6b6d65020cdef4eea519f86746e64d1055396" exitCode=0 Mar 18 13:11:52.215360 master-0 kubenswrapper[7599]: I0318 13:11:52.215277 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" 
event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerDied","Data":"aacaa4f75f3c9d2bdb4d347974e6b6d65020cdef4eea519f86746e64d1055396"} Mar 18 13:11:52.215360 master-0 kubenswrapper[7599]: I0318 13:11:52.215307 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerStarted","Data":"5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52"} Mar 18 13:11:52.253460 master-0 kubenswrapper[7599]: I0318 13:11:52.253395 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:52.253460 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:52.253460 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:52.253460 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:52.253813 master-0 kubenswrapper[7599]: I0318 13:11:52.253469 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:53.225917 master-0 kubenswrapper[7599]: I0318 13:11:53.225868 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931"} Mar 18 13:11:53.229723 master-0 kubenswrapper[7599]: I0318 13:11:53.229688 7599 generic.go:334] "Generic (PLEG): container finished" podID="0a6090f0-3a27-4102-b8dd-b071644a3543" containerID="d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7" exitCode=0 
Mar 18 13:11:53.229774 master-0 kubenswrapper[7599]: I0318 13:11:53.229730 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerDied","Data":"d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7"} Mar 18 13:11:53.230094 master-0 kubenswrapper[7599]: I0318 13:11:53.230073 7599 scope.go:117] "RemoveContainer" containerID="d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7" Mar 18 13:11:53.251539 master-0 kubenswrapper[7599]: I0318 13:11:53.250508 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" podStartSLOduration=2.250482083 podStartE2EDuration="2.250482083s" podCreationTimestamp="2026-03-18 13:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:11:53.239643026 +0000 UTC m=+288.200697268" watchObservedRunningTime="2026-03-18 13:11:53.250482083 +0000 UTC m=+288.211536325" Mar 18 13:11:53.254110 master-0 kubenswrapper[7599]: I0318 13:11:53.254046 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:53.254110 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:53.254110 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:53.254110 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:53.254110 master-0 kubenswrapper[7599]: I0318 13:11:53.254091 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 13:11:53.395495 master-0 kubenswrapper[7599]: I0318 13:11:53.395370 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2624f748-2132-40c6-aa2e-52df50ba8911" path="/var/lib/kubelet/pods/2624f748-2132-40c6-aa2e-52df50ba8911/volumes" Mar 18 13:11:54.244134 master-0 kubenswrapper[7599]: I0318 13:11:54.243966 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerStarted","Data":"efc7dcc65e51970be3f223938d17e3608d2b08a5580819c1889dcf943e6c33b1"} Mar 18 13:11:54.253977 master-0 kubenswrapper[7599]: I0318 13:11:54.253927 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:54.253977 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:54.253977 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:54.253977 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:54.254266 master-0 kubenswrapper[7599]: I0318 13:11:54.254018 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:55.254475 master-0 kubenswrapper[7599]: I0318 13:11:55.254362 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:55.254475 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:55.254475 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:11:55.254475 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:55.254475 master-0 kubenswrapper[7599]: I0318 13:11:55.254454 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:56.253593 master-0 kubenswrapper[7599]: I0318 13:11:56.253546 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:56.253593 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:56.253593 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:56.253593 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:56.253856 master-0 kubenswrapper[7599]: I0318 13:11:56.253611 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:56.668465 master-0 kubenswrapper[7599]: I0318 13:11:56.668190 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:11:56.668465 master-0 kubenswrapper[7599]: I0318 13:11:56.668434 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://ef75baaea3b231f0a943268458f551b383f49ce5906993775a78b47a21e43600" gracePeriod=30 Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.668524 7599 
kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://f8d9ce1d67226c0b362cac090a8a6e718851e873d29da1183f8e1cd8096dfcfa" gracePeriod=30 Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.669627 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: E0318 13:11:56.669866 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.669877 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: E0318 13:11:56.669890 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.669896 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: E0318 13:11:56.669906 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.669913 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: E0318 13:11:56.669925 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" 
containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.669932 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670049 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670067 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670076 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670084 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: E0318 13:11:56.670182 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670191 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.670350 master-0 kubenswrapper[7599]: I0318 13:11:56.670288 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 13:11:56.673174 master-0 kubenswrapper[7599]: I0318 13:11:56.671277 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.720451 master-0 kubenswrapper[7599]: I0318 13:11:56.720401 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:11:56.856438 master-0 kubenswrapper[7599]: I0318 13:11:56.856372 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:11:56.866914 master-0 kubenswrapper[7599]: I0318 13:11:56.866788 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.866914 master-0 kubenswrapper[7599]: I0318 13:11:56.866832 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.949658 master-0 kubenswrapper[7599]: I0318 13:11:56.947904 7599 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bdd82ae2-22f0-4d9d-b53d-93c949b988c1" Mar 18 13:11:56.967974 master-0 kubenswrapper[7599]: I0318 13:11:56.967918 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 
13:11:56.968154 master-0 kubenswrapper[7599]: I0318 13:11:56.968029 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 13:11:56.968154 master-0 kubenswrapper[7599]: I0318 13:11:56.968120 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 13:11:56.968234 master-0 kubenswrapper[7599]: I0318 13:11:56.968171 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 13:11:56.968234 master-0 kubenswrapper[7599]: I0318 13:11:56.968031 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:56.968305 master-0 kubenswrapper[7599]: I0318 13:11:56.968151 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:56.968305 master-0 kubenswrapper[7599]: I0318 13:11:56.968248 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 13:11:56.968428 master-0 kubenswrapper[7599]: I0318 13:11:56.968187 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:56.968470 master-0 kubenswrapper[7599]: I0318 13:11:56.968432 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.968470 master-0 kubenswrapper[7599]: I0318 13:11:56.968204 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:56.968470 master-0 kubenswrapper[7599]: I0318 13:11:56.968314 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:11:56.968557 master-0 kubenswrapper[7599]: I0318 13:11:56.968476 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.968557 master-0 kubenswrapper[7599]: I0318 13:11:56.968525 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.968611 master-0 kubenswrapper[7599]: I0318 13:11:56.968589 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:56.968772 master-0 kubenswrapper[7599]: I0318 13:11:56.968748 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node 
\"master-0\" DevicePath \"\"" Mar 18 13:11:56.968817 master-0 kubenswrapper[7599]: I0318 13:11:56.968775 7599 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:56.968817 master-0 kubenswrapper[7599]: I0318 13:11:56.968793 7599 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:56.968817 master-0 kubenswrapper[7599]: I0318 13:11:56.968806 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:56.968817 master-0 kubenswrapper[7599]: I0318 13:11:56.968818 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 13:11:57.015531 master-0 kubenswrapper[7599]: I0318 13:11:57.015482 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:11:57.043199 master-0 kubenswrapper[7599]: W0318 13:11:57.043157 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce43e217adc4d0869adee3ba7c628c00.slice/crio-60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0 WatchSource:0}: Error finding container 60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0: Status 404 returned error can't find the container with id 60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0 Mar 18 13:11:57.253265 master-0 kubenswrapper[7599]: I0318 13:11:57.253206 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:57.253265 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:57.253265 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:57.253265 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:57.254399 master-0 kubenswrapper[7599]: I0318 13:11:57.254350 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:57.272911 master-0 kubenswrapper[7599]: I0318 13:11:57.272863 7599 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f8d9ce1d67226c0b362cac090a8a6e718851e873d29da1183f8e1cd8096dfcfa" exitCode=0 Mar 18 13:11:57.272911 master-0 kubenswrapper[7599]: I0318 13:11:57.272896 7599 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" 
containerID="ef75baaea3b231f0a943268458f551b383f49ce5906993775a78b47a21e43600" exitCode=0 Mar 18 13:11:57.273156 master-0 kubenswrapper[7599]: I0318 13:11:57.272935 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad81888f7e81397819da2e276d066f221c19460ce9d1c808f123e49e5e137c12" Mar 18 13:11:57.273156 master-0 kubenswrapper[7599]: I0318 13:11:57.272961 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 13:11:57.273156 master-0 kubenswrapper[7599]: I0318 13:11:57.272973 7599 scope.go:117] "RemoveContainer" containerID="c54bcf4ddd56343697f9602341ecf51d80939627fe3f4a59637f96162fa1598d" Mar 18 13:11:57.279769 master-0 kubenswrapper[7599]: I0318 13:11:57.276669 7599 generic.go:334] "Generic (PLEG): container finished" podID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerID="d3c2d483573799510afcab12d760b1183078a2dd2aa3d3d851d413db0b1d8ab1" exitCode=0 Mar 18 13:11:57.279769 master-0 kubenswrapper[7599]: I0318 13:11:57.276795 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerDied","Data":"d3c2d483573799510afcab12d760b1183078a2dd2aa3d3d851d413db0b1d8ab1"} Mar 18 13:11:57.289015 master-0 kubenswrapper[7599]: I0318 13:11:57.288614 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0"} Mar 18 13:11:57.379339 master-0 kubenswrapper[7599]: I0318 13:11:57.379280 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes" Mar 18 13:11:57.379838 master-0 kubenswrapper[7599]: 
I0318 13:11:57.379753 7599 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 18 13:11:57.397846 master-0 kubenswrapper[7599]: I0318 13:11:57.397781 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:11:57.397846 master-0 kubenswrapper[7599]: I0318 13:11:57.397835 7599 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bdd82ae2-22f0-4d9d-b53d-93c949b988c1" Mar 18 13:11:57.398159 master-0 kubenswrapper[7599]: I0318 13:11:57.398131 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 13:11:57.398159 master-0 kubenswrapper[7599]: I0318 13:11:57.398156 7599 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bdd82ae2-22f0-4d9d-b53d-93c949b988c1" Mar 18 13:11:58.253684 master-0 kubenswrapper[7599]: I0318 13:11:58.253581 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:58.253684 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:58.253684 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:58.253684 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:58.253684 master-0 kubenswrapper[7599]: I0318 13:11:58.253651 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:11:58.295690 master-0 kubenswrapper[7599]: 
I0318 13:11:58.295614 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"44e87b551cd25fba74201071dfbbc65a904f19d68cc2d608c5f938a0ac57ad14"} Mar 18 13:11:59.258908 master-0 kubenswrapper[7599]: I0318 13:11:59.258602 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:11:59.258908 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:11:59.258908 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:11:59.258908 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:11:59.258908 master-0 kubenswrapper[7599]: I0318 13:11:59.258711 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:00.254229 master-0 kubenswrapper[7599]: I0318 13:12:00.254169 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:00.254229 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:00.254229 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:00.254229 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:00.255728 master-0 kubenswrapper[7599]: I0318 13:12:00.254240 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:01.208714 master-0 kubenswrapper[7599]: I0318 13:12:01.208610 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:12:01.208714 master-0 kubenswrapper[7599]: I0318 13:12:01.208658 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:12:01.252671 master-0 kubenswrapper[7599]: I0318 13:12:01.252603 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:01.252671 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:01.252671 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:01.252671 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:01.252944 master-0 kubenswrapper[7599]: I0318 13:12:01.252693 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:02.254636 master-0 kubenswrapper[7599]: I0318 13:12:02.254399 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:02.254636 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:02.254636 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:02.254636 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:02.254636 master-0 
kubenswrapper[7599]: I0318 13:12:02.254539 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:02.959299 master-0 kubenswrapper[7599]: I0318 13:12:02.959227 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-sp4ld_bf9d21f9-64d6-4e21-a985-491197038568/authentication-operator/0.log" Mar 18 13:12:03.154559 master-0 kubenswrapper[7599]: I0318 13:12:03.154496 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-sp4ld_bf9d21f9-64d6-4e21-a985-491197038568/authentication-operator/1.log" Mar 18 13:12:03.253931 master-0 kubenswrapper[7599]: I0318 13:12:03.253805 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:03.253931 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:03.253931 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:03.253931 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:03.253931 master-0 kubenswrapper[7599]: I0318 13:12:03.253872 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:03.359891 master-0 kubenswrapper[7599]: I0318 13:12:03.359855 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-gvmtv_00375107-9a3b-4161-a90d-72ea8827c5fc/router/0.log" Mar 18 
13:12:03.556764 master-0 kubenswrapper[7599]: I0318 13:12:03.556608 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-gvmtv_00375107-9a3b-4161-a90d-72ea8827c5fc/router/1.log" Mar 18 13:12:03.752831 master-0 kubenswrapper[7599]: I0318 13:12:03.752770 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5bb6f9f846-6wq9c_7fb5bad7-07d9-45ac-ad27-a887d12d148f/fix-audit-permissions/0.log" Mar 18 13:12:03.959094 master-0 kubenswrapper[7599]: I0318 13:12:03.959057 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5bb6f9f846-6wq9c_7fb5bad7-07d9-45ac-ad27-a887d12d148f/oauth-apiserver/0.log" Mar 18 13:12:04.156799 master-0 kubenswrapper[7599]: I0318 13:12:04.156749 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/0.log" Mar 18 13:12:04.254381 master-0 kubenswrapper[7599]: I0318 13:12:04.254278 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:04.254381 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:04.254381 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:04.254381 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:04.254722 master-0 kubenswrapper[7599]: I0318 13:12:04.254696 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:04.370291 master-0 kubenswrapper[7599]: I0318 13:12:04.370251 7599 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/1.log" Mar 18 13:12:04.641837 master-0 kubenswrapper[7599]: I0318 13:12:04.641612 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log" Mar 18 13:12:04.752385 master-0 kubenswrapper[7599]: I0318 13:12:04.752343 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log" Mar 18 13:12:04.952819 master-0 kubenswrapper[7599]: I0318 13:12:04.952769 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log" Mar 18 13:12:05.152694 master-0 kubenswrapper[7599]: I0318 13:12:05.152639 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:12:05.253830 master-0 kubenswrapper[7599]: I0318 13:12:05.253711 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:05.253830 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:05.253830 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:05.253830 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:05.253830 master-0 kubenswrapper[7599]: I0318 13:12:05.253772 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:05.480762 master-0 kubenswrapper[7599]: I0318 13:12:05.480714 7599 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:12:05.554770 master-0 kubenswrapper[7599]: I0318 13:12:05.554661 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:12:05.753631 master-0 kubenswrapper[7599]: I0318 13:12:05.753572 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log" Mar 18 13:12:05.952670 master-0 kubenswrapper[7599]: I0318 13:12:05.952617 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:12:06.159559 master-0 kubenswrapper[7599]: I0318 13:12:06.159493 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_814ffa63-b08e-4de8-b912-8d7f0638230b/installer/0.log" Mar 18 13:12:06.253795 master-0 kubenswrapper[7599]: I0318 13:12:06.253676 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:06.253795 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:06.253795 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:06.253795 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:06.253795 master-0 kubenswrapper[7599]: I0318 13:12:06.253746 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:06.499517 master-0 kubenswrapper[7599]: I0318 13:12:06.499321 7599 
scope.go:117] "RemoveContainer" containerID="ef75baaea3b231f0a943268458f551b383f49ce5906993775a78b47a21e43600" Mar 18 13:12:06.503620 master-0 kubenswrapper[7599]: I0318 13:12:06.503538 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-k8tv4_34a3a84b-048f-4822-9f05-0e7509327ca2/kube-apiserver-operator/0.log" Mar 18 13:12:06.558065 master-0 kubenswrapper[7599]: I0318 13:12:06.557901 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-k8tv4_34a3a84b-048f-4822-9f05-0e7509327ca2/kube-apiserver-operator/1.log" Mar 18 13:12:06.754128 master-0 kubenswrapper[7599]: I0318 13:12:06.754078 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log" Mar 18 13:12:06.954974 master-0 kubenswrapper[7599]: I0318 13:12:06.954925 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log" Mar 18 13:12:07.153508 master-0 kubenswrapper[7599]: I0318 13:12:07.153341 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log" Mar 18 13:12:07.254325 master-0 kubenswrapper[7599]: I0318 13:12:07.254158 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:07.254325 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:07.254325 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:07.254325 master-0 kubenswrapper[7599]: healthz check 
failed Mar 18 13:12:07.254325 master-0 kubenswrapper[7599]: I0318 13:12:07.254226 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:07.357780 master-0 kubenswrapper[7599]: I0318 13:12:07.357736 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41/installer/0.log" Mar 18 13:12:07.557698 master-0 kubenswrapper[7599]: I0318 13:12:07.557567 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a4c87a8-6bf0-43b2-b598-1561cba3e391/installer/0.log" Mar 18 13:12:07.797734 master-0 kubenswrapper[7599]: I0318 13:12:07.797658 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924632 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock\") pod \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924707 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir\") pod \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924763 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") pod \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\" (UID: \"6a4c87a8-6bf0-43b2-b598-1561cba3e391\") " Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924760 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock" (OuterVolumeSpecName: "var-lock") pod "6a4c87a8-6bf0-43b2-b598-1561cba3e391" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924821 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6a4c87a8-6bf0-43b2-b598-1561cba3e391" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924946 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:07.926430 master-0 kubenswrapper[7599]: I0318 13:12:07.924962 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:07.939440 master-0 kubenswrapper[7599]: I0318 13:12:07.930973 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6a4c87a8-6bf0-43b2-b598-1561cba3e391" (UID: "6a4c87a8-6bf0-43b2-b598-1561cba3e391"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:12:08.025870 master-0 kubenswrapper[7599]: I0318 13:12:08.025817 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a4c87a8-6bf0-43b2-b598-1561cba3e391-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:08.253731 master-0 kubenswrapper[7599]: I0318 13:12:08.253629 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:08.253731 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:08.253731 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:08.253731 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:08.253731 master-0 kubenswrapper[7599]: I0318 13:12:08.253710 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:08.371548 master-0 kubenswrapper[7599]: I0318 13:12:08.371076 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerDied","Data":"54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4"} Mar 18 13:12:08.371548 master-0 kubenswrapper[7599]: I0318 13:12:08.371111 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4" Mar 18 13:12:08.371548 master-0 kubenswrapper[7599]: I0318 13:12:08.371119 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 13:12:08.552505 master-0 kubenswrapper[7599]: I0318 13:12:08.552339 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-n7fn4_b75d4622-ac12-4f82-afc9-ab63e6278b0c/kube-controller-manager-operator/1.log" Mar 18 13:12:08.754489 master-0 kubenswrapper[7599]: I0318 13:12:08.754384 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-n7fn4_b75d4622-ac12-4f82-afc9-ab63e6278b0c/kube-controller-manager-operator/2.log" Mar 18 13:12:08.955897 master-0 kubenswrapper[7599]: I0318 13:12:08.955839 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log" Mar 18 13:12:09.156496 master-0 kubenswrapper[7599]: I0318 13:12:09.156420 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log" Mar 18 13:12:09.254128 master-0 kubenswrapper[7599]: I0318 13:12:09.254004 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:09.254128 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:09.254128 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:09.254128 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:09.254128 master-0 kubenswrapper[7599]: I0318 13:12:09.254080 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:09.360961 master-0 kubenswrapper[7599]: I0318 13:12:09.360542 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_615539dc-56e1-4489-9aee-33b3e769d4fc/installer/0.log" Mar 18 13:12:09.556898 master-0 kubenswrapper[7599]: I0318 13:12:09.556789 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-zm5rd_a8eff549-02f3-446e-b3a1-a66cecdc02a6/kube-scheduler-operator-container/0.log" Mar 18 13:12:09.754880 master-0 kubenswrapper[7599]: I0318 13:12:09.754818 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-zm5rd_a8eff549-02f3-446e-b3a1-a66cecdc02a6/kube-scheduler-operator-container/1.log" Mar 18 13:12:09.956333 master-0 kubenswrapper[7599]: I0318 13:12:09.956261 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-4bqf9_0c2c4a58-9780-4ecd-b417-e590ac3576ed/openshift-apiserver-operator/0.log" Mar 18 13:12:10.158729 master-0 kubenswrapper[7599]: I0318 13:12:10.158620 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-4bqf9_0c2c4a58-9780-4ecd-b417-e590ac3576ed/openshift-apiserver-operator/1.log" Mar 18 13:12:10.253476 master-0 kubenswrapper[7599]: I0318 13:12:10.253359 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:10.253476 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:10.253476 master-0 kubenswrapper[7599]: [+]process-running ok Mar 
18 13:12:10.253476 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:10.253476 master-0 kubenswrapper[7599]: I0318 13:12:10.253443 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:10.352290 master-0 kubenswrapper[7599]: I0318 13:12:10.352242 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-85b59d8688-wd26k_a2bdf5b0-8764-4b15-97c9-20af36634fd0/fix-audit-permissions/0.log" Mar 18 13:12:10.557939 master-0 kubenswrapper[7599]: I0318 13:12:10.557799 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-85b59d8688-wd26k_a2bdf5b0-8764-4b15-97c9-20af36634fd0/openshift-apiserver/0.log" Mar 18 13:12:10.577461 master-0 kubenswrapper[7599]: I0318 13:12:10.577385 7599 scope.go:117] "RemoveContainer" containerID="b515c044e4a53f4787c6a1c5354de363795974706495b5ee9abee555e41455a3" Mar 18 13:12:10.755502 master-0 kubenswrapper[7599]: I0318 13:12:10.755438 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-85b59d8688-wd26k_a2bdf5b0-8764-4b15-97c9-20af36634fd0/openshift-apiserver-check-endpoints/0.log" Mar 18 13:12:10.959055 master-0 kubenswrapper[7599]: I0318 13:12:10.959023 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/0.log" Mar 18 13:12:11.154354 master-0 kubenswrapper[7599]: I0318 13:12:11.154317 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/1.log" Mar 18 13:12:11.253300 master-0 kubenswrapper[7599]: I0318 13:12:11.253243 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:11.253300 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:11.253300 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:11.253300 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:11.253685 master-0 kubenswrapper[7599]: I0318 13:12:11.253320 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:11.355099 master-0 kubenswrapper[7599]: I0318 13:12:11.355003 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/1.log" Mar 18 13:12:11.391172 master-0 kubenswrapper[7599]: I0318 13:12:11.391078 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerStarted","Data":"403ebc2e5a41ebd83d754ef243b009a18ec0ae88fbc50c4907c8838a7c5edab4"} Mar 18 13:12:11.394104 master-0 kubenswrapper[7599]: I0318 13:12:11.394066 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"704f6a2e758bf39725732abfe9688e37d2c20b27969fbf143112e26938fff48b"} Mar 18 13:12:11.394104 master-0 kubenswrapper[7599]: I0318 13:12:11.394091 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"5801ea0bc2c8f6281bfc1858bdd8e4d303817df46389abb0fb746b4985f6eaba"} Mar 18 13:12:11.395899 master-0 kubenswrapper[7599]: I0318 13:12:11.395862 7599 generic.go:334] "Generic (PLEG): container finished" podID="d2316774-4ebc-4fa9-be07-eb1f16f614dd" containerID="1a099b747318c0fe3ecf7281f4b981921dcc9c60c98ba0e17565f1557ebc2839" exitCode=0 Mar 18 13:12:11.396095 master-0 kubenswrapper[7599]: I0318 13:12:11.395906 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerDied","Data":"1a099b747318c0fe3ecf7281f4b981921dcc9c60c98ba0e17565f1557ebc2839"} Mar 18 13:12:11.398050 master-0 kubenswrapper[7599]: I0318 13:12:11.397746 7599 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:12:11.399084 master-0 kubenswrapper[7599]: I0318 13:12:11.398841 7599 generic.go:334] "Generic (PLEG): container finished" podID="2a25632e-32d0-43d2-9be7-f515d29a1720" containerID="03f24e4774570f5bcb22723cea17bbe58e8e6018e449616ad7396efe7f6ed545" exitCode=0 Mar 18 13:12:11.399084 master-0 kubenswrapper[7599]: I0318 13:12:11.398905 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerDied","Data":"03f24e4774570f5bcb22723cea17bbe58e8e6018e449616ad7396efe7f6ed545"} Mar 18 13:12:11.400923 master-0 kubenswrapper[7599]: I0318 13:12:11.400885 7599 generic.go:334] "Generic (PLEG): container finished" podID="e390416b-4fa1-41d5-bc74-9e779b252350" containerID="b43b7d12d5938ada2c8a891881e47265567c35b517ea58afd154109c58f9fc86" exitCode=0 Mar 18 13:12:11.400996 master-0 kubenswrapper[7599]: I0318 13:12:11.400957 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" 
event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerDied","Data":"b43b7d12d5938ada2c8a891881e47265567c35b517ea58afd154109c58f9fc86"} Mar 18 13:12:12.258922 master-0 kubenswrapper[7599]: I0318 13:12:12.257922 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:12.258922 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:12.258922 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:12.258922 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:12.258922 master-0 kubenswrapper[7599]: I0318 13:12:12.258008 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:12.379177 master-0 kubenswrapper[7599]: I0318 13:12:12.379130 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-cpqm5_7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/openshift-controller-manager-operator/2.log" Mar 18 13:12:12.407434 master-0 kubenswrapper[7599]: I0318 13:12:12.405587 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-fffb75699-b7pwr_6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0/controller-manager/0.log" Mar 18 13:12:12.417144 master-0 kubenswrapper[7599]: I0318 13:12:12.417071 7599 generic.go:334] "Generic (PLEG): container finished" podID="9548e397-0db4-41c8-9cc8-b575060e9c66" containerID="403ebc2e5a41ebd83d754ef243b009a18ec0ae88fbc50c4907c8838a7c5edab4" exitCode=0 Mar 18 13:12:12.417336 master-0 kubenswrapper[7599]: I0318 13:12:12.417158 7599 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerDied","Data":"403ebc2e5a41ebd83d754ef243b009a18ec0ae88fbc50c4907c8838a7c5edab4"} Mar 18 13:12:12.421221 master-0 kubenswrapper[7599]: I0318 13:12:12.421171 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-fffb75699-b7pwr_6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0/controller-manager/1.log" Mar 18 13:12:12.423159 master-0 kubenswrapper[7599]: I0318 13:12:12.422924 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"d37eeb556049e5b3b4f4b9b22a5d363b374b83b4717134010b728f561f5eda04"} Mar 18 13:12:12.433040 master-0 kubenswrapper[7599]: I0318 13:12:12.432501 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-c8888769b-8mxp6_a350f317-f058-4102-af5c-cbba46d35e02/route-controller-manager/0.log" Mar 18 13:12:12.440627 master-0 kubenswrapper[7599]: I0318 13:12:12.440596 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/1.log" Mar 18 13:12:12.468846 master-0 kubenswrapper[7599]: I0318 13:12:12.468773 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=16.46875835 podStartE2EDuration="16.46875835s" podCreationTimestamp="2026-03-18 13:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:12:12.467665244 +0000 UTC m=+307.428719496" watchObservedRunningTime="2026-03-18 13:12:12.46875835 +0000 UTC 
m=+307.429812592" Mar 18 13:12:12.560547 master-0 kubenswrapper[7599]: I0318 13:12:12.560446 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/2.log" Mar 18 13:12:12.755757 master-0 kubenswrapper[7599]: I0318 13:12:12.755704 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-z8jkt_822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/olm-operator/0.log" Mar 18 13:12:12.952595 master-0 kubenswrapper[7599]: I0318 13:12:12.952548 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-p7vvx_2ea9eb53-0385-4a1a-a64f-696f8520cf49/kube-rbac-proxy/0.log" Mar 18 13:12:13.155017 master-0 kubenswrapper[7599]: I0318 13:12:13.154901 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-p7vvx_2ea9eb53-0385-4a1a-a64f-696f8520cf49/package-server-manager/0.log" Mar 18 13:12:13.263951 master-0 kubenswrapper[7599]: I0318 13:12:13.263891 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:13.263951 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:13.263951 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:13.263951 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:13.264513 master-0 kubenswrapper[7599]: I0318 13:12:13.263970 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 13:12:13.334655 master-0 kubenswrapper[7599]: I0318 13:12:13.334597 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"] Mar 18 13:12:13.334855 master-0 kubenswrapper[7599]: I0318 13:12:13.334828 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="cluster-cloud-controller-manager" containerID="cri-o://53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" gracePeriod=30 Mar 18 13:12:13.334965 master-0 kubenswrapper[7599]: I0318 13:12:13.334894 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="kube-rbac-proxy" containerID="cri-o://2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" gracePeriod=30 Mar 18 13:12:13.335059 master-0 kubenswrapper[7599]: I0318 13:12:13.334914 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="config-sync-controllers" containerID="cri-o://49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" gracePeriod=30 Mar 18 13:12:13.358972 master-0 kubenswrapper[7599]: I0318 13:12:13.358902 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/0.log" Mar 18 13:12:13.443857 master-0 kubenswrapper[7599]: I0318 13:12:13.443809 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerStarted","Data":"9e4278835752208e516e1189ae8ac5a890d3d4160a41274c8f9f115a5ab41220"} Mar 18 13:12:13.446637 master-0 kubenswrapper[7599]: I0318 13:12:13.446592 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerStarted","Data":"7f146f2adc27fe3158369931da2a9a1a2960129e0d07116c72c9e8f51434c0ed"} Mar 18 13:12:13.458469 master-0 kubenswrapper[7599]: I0318 13:12:13.458394 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerStarted","Data":"e8e8e5f5ee6fd77b7212349b29251fc3476241d2ab0b5d83a3ecdc238d84a2ae"} Mar 18 13:12:13.467206 master-0 kubenswrapper[7599]: I0318 13:12:13.467166 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerStarted","Data":"a279c0abb201f34c96a283114d3949bb3fe1eddd5b4315ac341720f9a904daea"} Mar 18 13:12:13.475669 master-0 kubenswrapper[7599]: I0318 13:12:13.475330 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-89st2" podStartSLOduration=2.864795644 podStartE2EDuration="23.475309484s" podCreationTimestamp="2026-03-18 13:11:50 +0000 UTC" firstStartedPulling="2026-03-18 13:11:52.219098822 +0000 UTC m=+287.180153064" lastFinishedPulling="2026-03-18 13:12:12.829612662 +0000 UTC m=+307.790666904" observedRunningTime="2026-03-18 13:12:13.47246337 +0000 UTC m=+308.433517612" watchObservedRunningTime="2026-03-18 13:12:13.475309484 +0000 UTC m=+308.436363726" Mar 18 13:12:13.500560 master-0 kubenswrapper[7599]: I0318 13:12:13.500484 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-8wqfk" podStartSLOduration=4.272551489 podStartE2EDuration="25.500461101s" podCreationTimestamp="2026-03-18 13:11:48 +0000 UTC" firstStartedPulling="2026-03-18 13:11:51.197716869 +0000 UTC m=+286.158771121" lastFinishedPulling="2026-03-18 13:12:12.425626491 +0000 UTC m=+307.386680733" observedRunningTime="2026-03-18 13:12:13.499026175 +0000 UTC m=+308.460080417" watchObservedRunningTime="2026-03-18 13:12:13.500461101 +0000 UTC m=+308.461515353" Mar 18 13:12:13.514614 master-0 kubenswrapper[7599]: I0318 13:12:13.514576 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" Mar 18 13:12:13.521822 master-0 kubenswrapper[7599]: I0318 13:12:13.521720 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bxlrz" podStartSLOduration=3.207289733 podStartE2EDuration="24.52170035s" podCreationTimestamp="2026-03-18 13:11:49 +0000 UTC" firstStartedPulling="2026-03-18 13:11:51.202713294 +0000 UTC m=+286.163767556" lastFinishedPulling="2026-03-18 13:12:12.517123931 +0000 UTC m=+307.478178173" observedRunningTime="2026-03-18 13:12:13.520706888 +0000 UTC m=+308.481761130" watchObservedRunningTime="2026-03-18 13:12:13.52170035 +0000 UTC m=+308.482754592" Mar 18 13:12:13.539680 master-0 kubenswrapper[7599]: I0318 13:12:13.539610 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tqw5h" podStartSLOduration=3.126473615 podStartE2EDuration="25.539593129s" podCreationTimestamp="2026-03-18 13:11:48 +0000 UTC" firstStartedPulling="2026-03-18 13:11:50.167381292 +0000 UTC m=+285.128435534" lastFinishedPulling="2026-03-18 13:12:12.580500806 +0000 UTC m=+307.541555048" observedRunningTime="2026-03-18 13:12:13.537460699 +0000 UTC m=+308.498514951" watchObservedRunningTime="2026-03-18 
13:12:13.539593129 +0000 UTC m=+308.500647361" Mar 18 13:12:13.558733 master-0 kubenswrapper[7599]: I0318 13:12:13.558674 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/0.log" Mar 18 13:12:13.706728 master-0 kubenswrapper[7599]: I0318 13:12:13.706551 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75tx\" (UniqueName: \"kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx\") pod \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " Mar 18 13:12:13.706728 master-0 kubenswrapper[7599]: I0318 13:12:13.706632 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config\") pod \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " Mar 18 13:12:13.706728 master-0 kubenswrapper[7599]: I0318 13:12:13.706671 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images\") pod \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " Mar 18 13:12:13.706728 master-0 kubenswrapper[7599]: I0318 13:12:13.706712 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube\") pod \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " Mar 18 13:12:13.707020 master-0 kubenswrapper[7599]: I0318 13:12:13.706778 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls\") pod \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\" (UID: \"20b8c731-9ec8-4abb-8cc2-9821b2819e48\") " Mar 18 13:12:13.707775 master-0 kubenswrapper[7599]: I0318 13:12:13.707731 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "20b8c731-9ec8-4abb-8cc2-9821b2819e48" (UID: "20b8c731-9ec8-4abb-8cc2-9821b2819e48"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:12:13.708088 master-0 kubenswrapper[7599]: I0318 13:12:13.708056 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "20b8c731-9ec8-4abb-8cc2-9821b2819e48" (UID: "20b8c731-9ec8-4abb-8cc2-9821b2819e48"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:12:13.708169 master-0 kubenswrapper[7599]: I0318 13:12:13.708088 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images" (OuterVolumeSpecName: "images") pod "20b8c731-9ec8-4abb-8cc2-9821b2819e48" (UID: "20b8c731-9ec8-4abb-8cc2-9821b2819e48"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:12:13.710241 master-0 kubenswrapper[7599]: I0318 13:12:13.710212 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx" (OuterVolumeSpecName: "kube-api-access-k75tx") pod "20b8c731-9ec8-4abb-8cc2-9821b2819e48" (UID: "20b8c731-9ec8-4abb-8cc2-9821b2819e48"). InnerVolumeSpecName "kube-api-access-k75tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:12:13.710591 master-0 kubenswrapper[7599]: I0318 13:12:13.710544 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "20b8c731-9ec8-4abb-8cc2-9821b2819e48" (UID: "20b8c731-9ec8-4abb-8cc2-9821b2819e48"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:12:13.765048 master-0 kubenswrapper[7599]: I0318 13:12:13.764997 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/1.log" Mar 18 13:12:13.808307 master-0 kubenswrapper[7599]: I0318 13:12:13.808256 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75tx\" (UniqueName: \"kubernetes.io/projected/20b8c731-9ec8-4abb-8cc2-9821b2819e48-kube-api-access-k75tx\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:13.808307 master-0 kubenswrapper[7599]: I0318 13:12:13.808302 7599 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:13.808636 master-0 kubenswrapper[7599]: I0318 13:12:13.808346 7599 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/20b8c731-9ec8-4abb-8cc2-9821b2819e48-images\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:13.808636 master-0 kubenswrapper[7599]: I0318 13:12:13.808360 7599 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/20b8c731-9ec8-4abb-8cc2-9821b2819e48-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 
13:12:13.808636 master-0 kubenswrapper[7599]: I0318 13:12:13.808373 7599 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/20b8c731-9ec8-4abb-8cc2-9821b2819e48-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 13:12:14.253246 master-0 kubenswrapper[7599]: I0318 13:12:14.253201 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:14.253246 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:14.253246 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:14.253246 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:14.253551 master-0 kubenswrapper[7599]: I0318 13:12:14.253285 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:14.473797 master-0 kubenswrapper[7599]: I0318 13:12:14.473764 7599 generic.go:334] "Generic (PLEG): container finished" podID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" exitCode=0 Mar 18 13:12:14.474342 master-0 kubenswrapper[7599]: I0318 13:12:14.474328 7599 generic.go:334] "Generic (PLEG): container finished" podID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" exitCode=0 Mar 18 13:12:14.474423 master-0 kubenswrapper[7599]: I0318 13:12:14.473810 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" Mar 18 13:12:14.474476 master-0 kubenswrapper[7599]: I0318 13:12:14.474392 7599 generic.go:334] "Generic (PLEG): container finished" podID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" exitCode=0 Mar 18 13:12:14.474476 master-0 kubenswrapper[7599]: I0318 13:12:14.473827 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerDied","Data":"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d"} Mar 18 13:12:14.474536 master-0 kubenswrapper[7599]: I0318 13:12:14.474507 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerDied","Data":"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6"} Mar 18 13:12:14.474536 master-0 kubenswrapper[7599]: I0318 13:12:14.474527 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerDied","Data":"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2"} Mar 18 13:12:14.474588 master-0 kubenswrapper[7599]: I0318 13:12:14.474542 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x" event={"ID":"20b8c731-9ec8-4abb-8cc2-9821b2819e48","Type":"ContainerDied","Data":"130c9b8b81d1dc1642b282baefa9a945c1fd58c682eee8d1812a7778c9b49286"} Mar 18 13:12:14.474588 master-0 
kubenswrapper[7599]: I0318 13:12:14.474571 7599 scope.go:117] "RemoveContainer" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" Mar 18 13:12:14.498629 master-0 kubenswrapper[7599]: I0318 13:12:14.498579 7599 scope.go:117] "RemoveContainer" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" Mar 18 13:12:14.521231 master-0 kubenswrapper[7599]: I0318 13:12:14.521204 7599 scope.go:117] "RemoveContainer" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" Mar 18 13:12:14.526871 master-0 kubenswrapper[7599]: I0318 13:12:14.526812 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"] Mar 18 13:12:14.531268 master-0 kubenswrapper[7599]: I0318 13:12:14.531223 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wr89x"] Mar 18 13:12:14.539685 master-0 kubenswrapper[7599]: I0318 13:12:14.539644 7599 scope.go:117] "RemoveContainer" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" Mar 18 13:12:14.545918 master-0 kubenswrapper[7599]: E0318 13:12:14.545867 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": container with ID starting with 2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d not found: ID does not exist" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" Mar 18 13:12:14.546068 master-0 kubenswrapper[7599]: I0318 13:12:14.545927 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d"} err="failed to get container status 
\"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": rpc error: code = NotFound desc = could not find container \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": container with ID starting with 2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d not found: ID does not exist" Mar 18 13:12:14.546068 master-0 kubenswrapper[7599]: I0318 13:12:14.545957 7599 scope.go:117] "RemoveContainer" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" Mar 18 13:12:14.546700 master-0 kubenswrapper[7599]: E0318 13:12:14.546676 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": container with ID starting with 49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6 not found: ID does not exist" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" Mar 18 13:12:14.546764 master-0 kubenswrapper[7599]: I0318 13:12:14.546703 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6"} err="failed to get container status \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": rpc error: code = NotFound desc = could not find container \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": container with ID starting with 49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6 not found: ID does not exist" Mar 18 13:12:14.546764 master-0 kubenswrapper[7599]: I0318 13:12:14.546720 7599 scope.go:117] "RemoveContainer" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" Mar 18 13:12:14.547340 master-0 kubenswrapper[7599]: E0318 13:12:14.547315 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": container with ID starting with 53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2 not found: ID does not exist" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" Mar 18 13:12:14.547389 master-0 kubenswrapper[7599]: I0318 13:12:14.547341 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2"} err="failed to get container status \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": rpc error: code = NotFound desc = could not find container \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": container with ID starting with 53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2 not found: ID does not exist" Mar 18 13:12:14.547389 master-0 kubenswrapper[7599]: I0318 13:12:14.547358 7599 scope.go:117] "RemoveContainer" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" Mar 18 13:12:14.547598 master-0 kubenswrapper[7599]: I0318 13:12:14.547576 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d"} err="failed to get container status \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": rpc error: code = NotFound desc = could not find container \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": container with ID starting with 2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d not found: ID does not exist" Mar 18 13:12:14.547654 master-0 kubenswrapper[7599]: I0318 13:12:14.547597 7599 scope.go:117] "RemoveContainer" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" Mar 18 13:12:14.547856 master-0 kubenswrapper[7599]: I0318 13:12:14.547831 7599 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6"} err="failed to get container status \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": rpc error: code = NotFound desc = could not find container \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": container with ID starting with 49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6 not found: ID does not exist" Mar 18 13:12:14.547856 master-0 kubenswrapper[7599]: I0318 13:12:14.547852 7599 scope.go:117] "RemoveContainer" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" Mar 18 13:12:14.548016 master-0 kubenswrapper[7599]: I0318 13:12:14.547991 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2"} err="failed to get container status \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": rpc error: code = NotFound desc = could not find container \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": container with ID starting with 53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2 not found: ID does not exist" Mar 18 13:12:14.548016 master-0 kubenswrapper[7599]: I0318 13:12:14.548012 7599 scope.go:117] "RemoveContainer" containerID="2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d" Mar 18 13:12:14.550572 master-0 kubenswrapper[7599]: I0318 13:12:14.550525 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d"} err="failed to get container status \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": rpc error: code = NotFound desc = could not find container \"2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d\": container with ID starting with 
2f8cede3208eb8ed1f37fd3a1de6bd3ce1f710b4758b33028b063eb6f08e2f1d not found: ID does not exist" Mar 18 13:12:14.550572 master-0 kubenswrapper[7599]: I0318 13:12:14.550562 7599 scope.go:117] "RemoveContainer" containerID="49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6" Mar 18 13:12:14.550902 master-0 kubenswrapper[7599]: I0318 13:12:14.550858 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6"} err="failed to get container status \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": rpc error: code = NotFound desc = could not find container \"49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6\": container with ID starting with 49cb30268577529a78d37d728d3b27d0c56be1f0d196c9688ac6dc0186c908b6 not found: ID does not exist" Mar 18 13:12:14.550950 master-0 kubenswrapper[7599]: I0318 13:12:14.550909 7599 scope.go:117] "RemoveContainer" containerID="53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2" Mar 18 13:12:14.554615 master-0 kubenswrapper[7599]: I0318 13:12:14.554562 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2"} err="failed to get container status \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": rpc error: code = NotFound desc = could not find container \"53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2\": container with ID starting with 53dc88cb178a1099a7385d15a32bbd65471f1352977700c7370927d3cedef7d2 not found: ID does not exist" Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: I0318 13:12:14.574369 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"] Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: E0318 13:12:14.574638 
7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="config-sync-controllers" Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: I0318 13:12:14.574654 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="config-sync-controllers" Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: E0318 13:12:14.574674 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="kube-rbac-proxy" Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: I0318 13:12:14.574683 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="kube-rbac-proxy" Mar 18 13:12:14.574688 master-0 kubenswrapper[7599]: E0318 13:12:14.574703 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="cluster-cloud-controller-manager" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574712 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="cluster-cloud-controller-manager" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: E0318 13:12:14.574726 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574733 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574834 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="cluster-cloud-controller-manager" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574853 7599 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574862 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="config-sync-controllers" Mar 18 13:12:14.575026 master-0 kubenswrapper[7599]: I0318 13:12:14.574868 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" containerName="kube-rbac-proxy" Mar 18 13:12:14.575779 master-0 kubenswrapper[7599]: I0318 13:12:14.575715 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.580548 master-0 kubenswrapper[7599]: I0318 13:12:14.577985 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5" Mar 18 13:12:14.580548 master-0 kubenswrapper[7599]: I0318 13:12:14.578223 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:12:14.580548 master-0 kubenswrapper[7599]: I0318 13:12:14.578451 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:12:14.580548 master-0 kubenswrapper[7599]: I0318 13:12:14.578594 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:12:14.580548 master-0 kubenswrapper[7599]: I0318 13:12:14.578785 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:12:14.584754 master-0 kubenswrapper[7599]: I0318 13:12:14.584707 7599 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:12:14.622351 master-0 kubenswrapper[7599]: I0318 13:12:14.621733 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.622351 master-0 kubenswrapper[7599]: I0318 13:12:14.621784 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.622351 master-0 kubenswrapper[7599]: I0318 13:12:14.621891 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.622351 master-0 kubenswrapper[7599]: I0318 13:12:14.622051 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.622351 master-0 kubenswrapper[7599]: I0318 13:12:14.622090 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqln\" (UniqueName: \"kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.722791 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.722853 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqln\" (UniqueName: \"kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.723087 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.723225 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.723291 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.723232 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.724057 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.724119 master-0 kubenswrapper[7599]: I0318 13:12:14.724064 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.727908 master-0 kubenswrapper[7599]: I0318 13:12:14.727284 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.742735 master-0 kubenswrapper[7599]: I0318 13:12:14.742685 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqln\" (UniqueName: \"kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.918373 master-0 kubenswrapper[7599]: I0318 13:12:14.918298 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:12:14.936242 master-0 kubenswrapper[7599]: W0318 13:12:14.936192 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80994f33_21e7_45d6_9f21_1cfd8e1f41ce.slice/crio-6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283 WatchSource:0}: Error finding container 6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283: Status 404 returned error can't find the container with id 6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283 Mar 18 13:12:15.258438 master-0 kubenswrapper[7599]: I0318 13:12:15.256557 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:15.258438 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:15.258438 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:15.258438 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:15.258438 master-0 kubenswrapper[7599]: I0318 13:12:15.256619 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:15.380969 master-0 kubenswrapper[7599]: I0318 13:12:15.380941 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b8c731-9ec8-4abb-8cc2-9821b2819e48" path="/var/lib/kubelet/pods/20b8c731-9ec8-4abb-8cc2-9821b2819e48/volumes" Mar 18 13:12:15.489342 master-0 kubenswrapper[7599]: I0318 13:12:15.489272 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31"} Mar 18 13:12:15.489342 master-0 kubenswrapper[7599]: I0318 13:12:15.489327 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29"} Mar 18 13:12:15.489342 master-0 kubenswrapper[7599]: I0318 13:12:15.489347 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283"} Mar 18 13:12:16.253464 master-0 kubenswrapper[7599]: I0318 13:12:16.253425 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:16.253464 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:16.253464 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:16.253464 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:16.253840 master-0 kubenswrapper[7599]: I0318 13:12:16.253481 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:16.498212 master-0 kubenswrapper[7599]: I0318 
13:12:16.498154 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"aea0ba9c47771383cbd332d289d9bc75e884ce916b9826020091a8cb0cfb26f5"} Mar 18 13:12:16.692491 master-0 kubenswrapper[7599]: I0318 13:12:16.692355 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" podStartSLOduration=2.69233505 podStartE2EDuration="2.69233505s" podCreationTimestamp="2026-03-18 13:12:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:12:16.68867949 +0000 UTC m=+311.649733742" watchObservedRunningTime="2026-03-18 13:12:16.69233505 +0000 UTC m=+311.653389312" Mar 18 13:12:17.016907 master-0 kubenswrapper[7599]: I0318 13:12:17.016766 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.016907 master-0 kubenswrapper[7599]: I0318 13:12:17.016843 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.016907 master-0 kubenswrapper[7599]: I0318 13:12:17.016867 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.016907 master-0 kubenswrapper[7599]: I0318 13:12:17.016879 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.021284 master-0 kubenswrapper[7599]: I0318 13:12:17.021237 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.022156 master-0 kubenswrapper[7599]: I0318 13:12:17.022123 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:17.253630 master-0 kubenswrapper[7599]: I0318 13:12:17.253573 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:17.253630 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:17.253630 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:17.253630 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:17.253934 master-0 kubenswrapper[7599]: I0318 13:12:17.253643 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:17.517495 master-0 kubenswrapper[7599]: I0318 13:12:17.517435 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:18.254530 master-0 kubenswrapper[7599]: I0318 13:12:18.254464 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:18.254530 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:18.254530 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:18.254530 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:12:18.254985 master-0 kubenswrapper[7599]: I0318 13:12:18.254551 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:18.705846 master-0 kubenswrapper[7599]: I0318 13:12:18.705789 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:12:18.706781 master-0 kubenswrapper[7599]: I0318 13:12:18.706734 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:12:18.746624 master-0 kubenswrapper[7599]: I0318 13:12:18.746574 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:12:19.255177 master-0 kubenswrapper[7599]: I0318 13:12:19.255111 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:19.255177 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:19.255177 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:19.255177 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:19.255698 master-0 kubenswrapper[7599]: I0318 13:12:19.255187 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:19.551396 master-0 kubenswrapper[7599]: I0318 13:12:19.551288 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:12:19.558682 master-0 kubenswrapper[7599]: I0318 13:12:19.558614 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:12:19.559188 master-0 kubenswrapper[7599]: I0318 13:12:19.558766 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:12:19.596124 master-0 kubenswrapper[7599]: I0318 13:12:19.596064 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:12:20.253707 master-0 kubenswrapper[7599]: I0318 13:12:20.253557 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:20.253707 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:20.253707 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:20.253707 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:20.253707 master-0 kubenswrapper[7599]: I0318 13:12:20.253687 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:20.472747 master-0 kubenswrapper[7599]: I0318 13:12:20.472644 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:12:20.472747 master-0 kubenswrapper[7599]: I0318 13:12:20.472713 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:12:20.514999 master-0 
kubenswrapper[7599]: I0318 13:12:20.514897 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:12:20.554101 master-0 kubenswrapper[7599]: I0318 13:12:20.554038 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:12:20.554323 master-0 kubenswrapper[7599]: I0318 13:12:20.554196 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:12:21.214740 master-0 kubenswrapper[7599]: I0318 13:12:21.214674 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:12:21.218178 master-0 kubenswrapper[7599]: I0318 13:12:21.218129 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:12:21.254012 master-0 kubenswrapper[7599]: I0318 13:12:21.253948 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:21.254012 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:21.254012 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:21.254012 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:21.254568 master-0 kubenswrapper[7599]: I0318 13:12:21.254016 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:21.404768 master-0 kubenswrapper[7599]: I0318 13:12:21.404683 7599 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:12:21.404768 master-0 kubenswrapper[7599]: I0318 13:12:21.404754 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:12:21.439084 master-0 kubenswrapper[7599]: I0318 13:12:21.439031 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:12:21.570327 master-0 kubenswrapper[7599]: I0318 13:12:21.570198 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:12:22.752778 master-0 kubenswrapper[7599]: I0318 13:12:22.752560 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:22.752778 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:22.752778 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:22.752778 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:22.753502 master-0 kubenswrapper[7599]: I0318 13:12:22.752780 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:23.254572 master-0 kubenswrapper[7599]: I0318 13:12:23.254493 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:23.254572 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 
13:12:23.254572 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:23.254572 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:23.254953 master-0 kubenswrapper[7599]: I0318 13:12:23.254599 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:24.253378 master-0 kubenswrapper[7599]: I0318 13:12:24.253282 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:24.253378 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:24.253378 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:24.253378 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:24.253378 master-0 kubenswrapper[7599]: I0318 13:12:24.253369 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:25.253438 master-0 kubenswrapper[7599]: I0318 13:12:25.253362 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:25.253438 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:25.253438 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:25.253438 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:25.253438 master-0 kubenswrapper[7599]: I0318 13:12:25.253430 
7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:26.255098 master-0 kubenswrapper[7599]: I0318 13:12:26.255035 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:26.255098 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:26.255098 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:26.255098 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:26.256090 master-0 kubenswrapper[7599]: I0318 13:12:26.255115 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:27.023357 master-0 kubenswrapper[7599]: I0318 13:12:27.023289 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:12:27.254669 master-0 kubenswrapper[7599]: I0318 13:12:27.254594 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:27.254669 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:27.254669 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:27.254669 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:27.254993 master-0 kubenswrapper[7599]: I0318 
13:12:27.254703 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:28.253925 master-0 kubenswrapper[7599]: I0318 13:12:28.253837 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:28.253925 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:28.253925 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:28.253925 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:28.253925 master-0 kubenswrapper[7599]: I0318 13:12:28.253893 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:29.255124 master-0 kubenswrapper[7599]: I0318 13:12:29.255024 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:29.255124 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:29.255124 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:29.255124 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:29.255833 master-0 kubenswrapper[7599]: I0318 13:12:29.255131 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:12:30.253264 master-0 kubenswrapper[7599]: I0318 13:12:30.253228 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:30.253264 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:30.253264 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:30.253264 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:30.253628 master-0 kubenswrapper[7599]: I0318 13:12:30.253602 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:31.254318 master-0 kubenswrapper[7599]: I0318 13:12:31.254264 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:31.254318 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:31.254318 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:31.254318 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:31.254948 master-0 kubenswrapper[7599]: I0318 13:12:31.254337 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:32.253968 master-0 kubenswrapper[7599]: I0318 13:12:32.253899 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:32.253968 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:32.253968 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:32.253968 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:32.253968 master-0 kubenswrapper[7599]: I0318 13:12:32.253956 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:33.254055 master-0 kubenswrapper[7599]: I0318 13:12:33.253991 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:33.254055 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:33.254055 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:33.254055 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:33.254861 master-0 kubenswrapper[7599]: I0318 13:12:33.254093 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:34.252740 master-0 kubenswrapper[7599]: I0318 13:12:34.252675 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:34.252740 master-0 kubenswrapper[7599]: 
[-]has-synced failed: reason withheld Mar 18 13:12:34.252740 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:34.252740 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:34.253211 master-0 kubenswrapper[7599]: I0318 13:12:34.253177 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:35.254224 master-0 kubenswrapper[7599]: I0318 13:12:35.254160 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:35.254224 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:35.254224 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:35.254224 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:35.255007 master-0 kubenswrapper[7599]: I0318 13:12:35.254238 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:12:36.253888 master-0 kubenswrapper[7599]: I0318 13:12:36.253817 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:12:36.253888 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:12:36.253888 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:12:36.253888 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:12:36.253888 master-0 
kubenswrapper[7599]: I0318 13:12:36.253883 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:37.256158 master-0 kubenswrapper[7599]: I0318 13:12:37.256072 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:37.256158 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:37.256158 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:37.256158 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:37.256865 master-0 kubenswrapper[7599]: I0318 13:12:37.256166 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:38.253644 master-0 kubenswrapper[7599]: I0318 13:12:38.253570 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:38.253644 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:38.253644 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:38.253644 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:38.253982 master-0 kubenswrapper[7599]: I0318 13:12:38.253653 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:39.253863 master-0 kubenswrapper[7599]: I0318 13:12:39.253820 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:39.253863 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:39.253863 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:39.253863 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:39.254567 master-0 kubenswrapper[7599]: I0318 13:12:39.253878 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:40.254128 master-0 kubenswrapper[7599]: I0318 13:12:40.254017 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:40.254128 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:40.254128 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:40.254128 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:40.255282 master-0 kubenswrapper[7599]: I0318 13:12:40.254141 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:41.253431 master-0 kubenswrapper[7599]: I0318 13:12:41.253281 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:41.253431 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:41.253431 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:41.253431 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:41.253726 master-0 kubenswrapper[7599]: I0318 13:12:41.253486 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:42.253219 master-0 kubenswrapper[7599]: I0318 13:12:42.253162 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:42.253219 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:42.253219 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:42.253219 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:42.254248 master-0 kubenswrapper[7599]: I0318 13:12:42.254205 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:43.255686 master-0 kubenswrapper[7599]: I0318 13:12:43.255579 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:43.255686 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:43.255686 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:43.255686 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:43.256753 master-0 kubenswrapper[7599]: I0318 13:12:43.255706 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:44.254066 master-0 kubenswrapper[7599]: I0318 13:12:44.253981 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:44.254066 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:44.254066 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:44.254066 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:44.254066 master-0 kubenswrapper[7599]: I0318 13:12:44.254059 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:45.254444 master-0 kubenswrapper[7599]: I0318 13:12:45.254370 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:45.254444 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:45.254444 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:45.254444 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:45.254444 master-0 kubenswrapper[7599]: I0318 13:12:45.254443 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:46.253870 master-0 kubenswrapper[7599]: I0318 13:12:46.253795 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:46.253870 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:46.253870 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:46.253870 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:46.254212 master-0 kubenswrapper[7599]: I0318 13:12:46.253878 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:47.253742 master-0 kubenswrapper[7599]: I0318 13:12:47.253685 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:47.253742 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:47.253742 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:47.253742 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:47.254297 master-0 kubenswrapper[7599]: I0318 13:12:47.253742 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:48.253510 master-0 kubenswrapper[7599]: I0318 13:12:48.253461 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:48.253510 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:48.253510 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:48.253510 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:48.254123 master-0 kubenswrapper[7599]: I0318 13:12:48.253528 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:49.254878 master-0 kubenswrapper[7599]: I0318 13:12:49.254801 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:49.254878 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:49.254878 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:49.254878 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:49.255937 master-0 kubenswrapper[7599]: I0318 13:12:49.254880 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:50.253929 master-0 kubenswrapper[7599]: I0318 13:12:50.253849 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:50.253929 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:50.253929 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:50.253929 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:50.254365 master-0 kubenswrapper[7599]: I0318 13:12:50.253939 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:51.255082 master-0 kubenswrapper[7599]: I0318 13:12:51.255033 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:51.255082 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:51.255082 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:51.255082 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:51.256096 master-0 kubenswrapper[7599]: I0318 13:12:51.256062 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:52.254389 master-0 kubenswrapper[7599]: I0318 13:12:52.254322 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:52.254389 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:52.254389 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:52.254389 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:52.255186 master-0 kubenswrapper[7599]: I0318 13:12:52.255038 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:53.254580 master-0 kubenswrapper[7599]: I0318 13:12:53.254520 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:53.254580 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:53.254580 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:53.254580 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:53.255127 master-0 kubenswrapper[7599]: I0318 13:12:53.255086 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:54.256641 master-0 kubenswrapper[7599]: I0318 13:12:54.256571 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:54.256641 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:54.256641 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:54.256641 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:54.257912 master-0 kubenswrapper[7599]: I0318 13:12:54.256661 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:55.253207 master-0 kubenswrapper[7599]: I0318 13:12:55.253096 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:55.253207 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:55.253207 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:55.253207 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:55.253207 master-0 kubenswrapper[7599]: I0318 13:12:55.253191 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:56.253614 master-0 kubenswrapper[7599]: I0318 13:12:56.253568 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:56.253614 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:56.253614 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:56.253614 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:56.254463 master-0 kubenswrapper[7599]: I0318 13:12:56.254434 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:57.254189 master-0 kubenswrapper[7599]: I0318 13:12:57.254124 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:57.254189 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:57.254189 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:57.254189 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:57.255038 master-0 kubenswrapper[7599]: I0318 13:12:57.254207 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:58.254127 master-0 kubenswrapper[7599]: I0318 13:12:58.254035 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:58.254127 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:58.254127 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:58.254127 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:58.255666 master-0 kubenswrapper[7599]: I0318 13:12:58.254162 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:12:59.255023 master-0 kubenswrapper[7599]: I0318 13:12:59.254911 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:12:59.255023 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:12:59.255023 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:12:59.255023 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:12:59.255023 master-0 kubenswrapper[7599]: I0318 13:12:59.255012 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:00.253984 master-0 kubenswrapper[7599]: I0318 13:13:00.253880 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:00.253984 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:00.253984 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:00.253984 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:00.253984 master-0 kubenswrapper[7599]: I0318 13:13:00.253978 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:01.253993 master-0 kubenswrapper[7599]: I0318 13:13:01.253928 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:01.253993 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:01.253993 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:01.253993 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:01.253993 master-0 kubenswrapper[7599]: I0318 13:13:01.253987 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:02.255675 master-0 kubenswrapper[7599]: I0318 13:13:02.255478 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:02.255675 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:02.255675 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:02.255675 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:02.255675 master-0 kubenswrapper[7599]: I0318 13:13:02.255594 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:03.255067 master-0 kubenswrapper[7599]: I0318 13:13:03.254952 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:03.255067 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:03.255067 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:03.255067 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:03.256498 master-0 kubenswrapper[7599]: I0318 13:13:03.255076 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:04.254406 master-0 kubenswrapper[7599]: I0318 13:13:04.254292 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:04.254406 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:04.254406 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:04.254406 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:04.254406 master-0 kubenswrapper[7599]: I0318 13:13:04.254403 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:05.254828 master-0 kubenswrapper[7599]: I0318 13:13:05.254726 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:05.254828 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:05.254828 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:05.254828 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:05.254828 master-0 kubenswrapper[7599]: I0318 13:13:05.254802 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:06.253542 master-0 kubenswrapper[7599]: I0318 13:13:06.253477 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:06.253542 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:06.253542 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:06.253542 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:06.253975 master-0 kubenswrapper[7599]: I0318 13:13:06.253542 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:07.253920 master-0 kubenswrapper[7599]: I0318 13:13:07.253813 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:07.253920 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:07.253920 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:07.253920 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:07.254620 master-0 kubenswrapper[7599]: I0318 13:13:07.253945 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:08.254124 master-0 kubenswrapper[7599]: I0318 13:13:08.254048 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:08.254124 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:08.254124 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:08.254124 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:08.254124 master-0 kubenswrapper[7599]: I0318 13:13:08.254128 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:09.253955 master-0 kubenswrapper[7599]: I0318 13:13:09.253807 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:09.253955 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:09.253955 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:09.253955 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:09.253955 master-0 kubenswrapper[7599]: I0318 13:13:09.253895 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:13:09.255279 master-0 kubenswrapper[7599]: I0318 13:13:09.253971 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:13:09.255279 master-0 kubenswrapper[7599]: I0318 13:13:09.255059 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd"} pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 13:13:09.255279 master-0 kubenswrapper[7599]: I0318 13:13:09.255122 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" containerID="cri-o://f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd" gracePeriod=3600
Mar 18 13:13:14.739431 master-0 kubenswrapper[7599]: I0318 13:13:14.739299 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6hldc"]
Mar 18 13:13:14.741023 master-0 kubenswrapper[7599]: I0318 13:13:14.740982 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:14.748436 master-0 kubenswrapper[7599]: I0318 13:13:14.748130 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-7kt87"
Mar 18 13:13:14.748602 master-0 kubenswrapper[7599]: I0318 13:13:14.748528 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 13:13:14.748844 master-0 kubenswrapper[7599]: I0318 13:13:14.748809 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 13:13:14.749040 master-0 kubenswrapper[7599]: I0318 13:13:14.749012 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 13:13:14.760096 master-0 kubenswrapper[7599]: I0318 13:13:14.759185 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6hldc"]
Mar 18 13:13:14.867855 master-0 kubenswrapper[7599]: I0318 13:13:14.867776 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:14.868072 master-0 kubenswrapper[7599]: I0318 13:13:14.867896 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkw55\" (UniqueName: \"kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:14.968943 master-0 kubenswrapper[7599]: I0318 13:13:14.968898 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:14.969222 master-0 kubenswrapper[7599]: I0318 13:13:14.969084 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkw55\" (UniqueName: \"kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:14.969222 master-0 kubenswrapper[7599]: E0318 13:13:14.969100 7599 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 18 13:13:14.969222 master-0 kubenswrapper[7599]: E0318 13:13:14.969194 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert podName:e54baea8-6c3e-45a0-ac8c-880a8aaa8208 nodeName:}" failed. No retries permitted until 2026-03-18 13:13:15.469173341 +0000 UTC m=+370.430227593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert") pod "ingress-canary-6hldc" (UID: "e54baea8-6c3e-45a0-ac8c-880a8aaa8208") : secret "canary-serving-cert" not found
Mar 18 13:13:14.994114 master-0 kubenswrapper[7599]: I0318 13:13:14.994023 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkw55\" (UniqueName: \"kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:15.160678 master-0 kubenswrapper[7599]: I0318 13:13:15.160602 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/1.log"
Mar 18 13:13:15.161507 master-0 kubenswrapper[7599]: I0318 13:13:15.161474 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/0.log"
Mar 18 13:13:15.161591 master-0 kubenswrapper[7599]: I0318 13:13:15.161519 7599 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10" exitCode=1
Mar 18 13:13:15.161591 master-0 kubenswrapper[7599]: I0318 13:13:15.161550 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10"}
Mar 18 13:13:15.161591 master-0 kubenswrapper[7599]: I0318 13:13:15.161581 7599 scope.go:117] "RemoveContainer" containerID="a09e30a0e0a70728f4eacd16714f41244f1eaa2c744901296ee7506c0e6ed81f"
Mar 18 13:13:15.162229 master-0 kubenswrapper[7599]: I0318 13:13:15.162174 7599 scope.go:117] "RemoveContainer" containerID="df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10"
Mar 18 13:13:15.162549 master-0 kubenswrapper[7599]: E0318 13:13:15.162506 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e"
Mar 18 13:13:15.476048 master-0 kubenswrapper[7599]: I0318 13:13:15.475947 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:15.487139 master-0 kubenswrapper[7599]: I0318 13:13:15.487065 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:15.710240 master-0 kubenswrapper[7599]: I0318 13:13:15.710119 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:13:16.139617 master-0 kubenswrapper[7599]: I0318 13:13:16.139487 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6hldc"]
Mar 18 13:13:16.149098 master-0 kubenswrapper[7599]: W0318 13:13:16.148697 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode54baea8_6c3e_45a0_ac8c_880a8aaa8208.slice/crio-79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec WatchSource:0}: Error finding container 79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec: Status 404 returned error can't find the container with id 79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec
Mar 18 13:13:16.169363 master-0 kubenswrapper[7599]: I0318 13:13:16.169317 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/1.log"
Mar 18 13:13:16.170767 master-0 kubenswrapper[7599]: I0318 13:13:16.170733 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hldc" event={"ID":"e54baea8-6c3e-45a0-ac8c-880a8aaa8208","Type":"ContainerStarted","Data":"79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec"}
Mar 18 13:13:17.183984 master-0 kubenswrapper[7599]: I0318 13:13:17.183903 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hldc" event={"ID":"e54baea8-6c3e-45a0-ac8c-880a8aaa8208","Type":"ContainerStarted","Data":"381bb2c8f965d035883df3aed2837df6b027fe5a3fa9b570128156fcc37a3b8c"}
Mar 18 13:13:29.374269 master-0 kubenswrapper[7599]: I0318 13:13:29.373149 7599 scope.go:117] "RemoveContainer" containerID="df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10"
Mar 18 13:13:29.406406 master-0 kubenswrapper[7599]: I0318
13:13:29.406303 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6hldc" podStartSLOduration=15.406280685 podStartE2EDuration="15.406280685s" podCreationTimestamp="2026-03-18 13:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:13:17.222126495 +0000 UTC m=+372.183180777" watchObservedRunningTime="2026-03-18 13:13:29.406280685 +0000 UTC m=+384.367334957" Mar 18 13:13:30.265778 master-0 kubenswrapper[7599]: I0318 13:13:30.265727 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/1.log" Mar 18 13:13:30.266923 master-0 kubenswrapper[7599]: I0318 13:13:30.266850 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f"} Mar 18 13:13:55.444825 master-0 kubenswrapper[7599]: I0318 13:13:55.444751 7599 generic.go:334] "Generic (PLEG): container finished" podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd" exitCode=0 Mar 18 13:13:55.445868 master-0 kubenswrapper[7599]: I0318 13:13:55.444835 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd"} Mar 18 13:13:55.445868 master-0 kubenswrapper[7599]: I0318 13:13:55.444967 7599 scope.go:117] "RemoveContainer" containerID="9ce3394879cb362e5d7236279a34aac71fedeb577c1dc6ec801d0fa7287bb15c" Mar 18 13:13:56.456953 master-0 kubenswrapper[7599]: I0318 
13:13:56.456862 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2"}
Mar 18 13:13:57.251482 master-0 kubenswrapper[7599]: I0318 13:13:57.251347 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:13:57.255214 master-0 kubenswrapper[7599]: I0318 13:13:57.255143 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:13:57.255214 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:13:57.255214 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:13:57.255214 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:13:57.255671 master-0 kubenswrapper[7599]: I0318 13:13:57.255252 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... the identical startup-probe failure pair (patch_prober healthz output + prober "Probe failed", HTTP 500) repeats once per second from 13:13:58 through 13:14:28; the only non-probe entries in that window are the two below ...]
Mar 18 13:14:05.251576 master-0 kubenswrapper[7599]: I0318 13:14:05.251468 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:14:10.700196 master-0 kubenswrapper[7599]: I0318 13:14:10.700113 7599 scope.go:117] "RemoveContainer" containerID="53bd0f911da22f6347919de47020dd5ee65cf68785aa75b9d25bd48d7e0221f2"
Mar 18 13:14:29.255624 master-0 kubenswrapper[7599]: I0318 13:14:29.255535 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:14:29.255624 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:14:29.255624 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:14:29.255624 master-0 kubenswrapper[7599]: healthz
check failed Mar 18 13:14:29.255624 master-0 kubenswrapper[7599]: I0318 13:14:29.255605 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:30.253592 master-0 kubenswrapper[7599]: I0318 13:14:30.253511 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:30.253592 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:30.253592 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:30.253592 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:30.253592 master-0 kubenswrapper[7599]: I0318 13:14:30.253576 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:31.253321 master-0 kubenswrapper[7599]: I0318 13:14:31.253233 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:31.253321 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:31.253321 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:31.253321 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:31.254234 master-0 kubenswrapper[7599]: I0318 13:14:31.253332 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:32.253955 master-0 kubenswrapper[7599]: I0318 13:14:32.253840 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:32.253955 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:32.253955 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:32.253955 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:32.255168 master-0 kubenswrapper[7599]: I0318 13:14:32.253951 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:33.254240 master-0 kubenswrapper[7599]: I0318 13:14:33.254182 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:33.254240 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:33.254240 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:33.254240 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:33.254797 master-0 kubenswrapper[7599]: I0318 13:14:33.254263 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:34.254338 master-0 kubenswrapper[7599]: I0318 13:14:34.254266 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:34.254338 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:34.254338 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:34.254338 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:34.255068 master-0 kubenswrapper[7599]: I0318 13:14:34.254361 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:35.254251 master-0 kubenswrapper[7599]: I0318 13:14:35.254203 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:35.254251 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:35.254251 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:35.254251 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:35.254916 master-0 kubenswrapper[7599]: I0318 13:14:35.254884 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:36.253481 master-0 kubenswrapper[7599]: I0318 13:14:36.253439 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:36.253481 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:36.253481 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:36.253481 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:36.253760 master-0 kubenswrapper[7599]: I0318 13:14:36.253523 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:37.253896 master-0 kubenswrapper[7599]: I0318 13:14:37.253831 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:37.253896 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:37.253896 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:37.253896 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:37.254571 master-0 kubenswrapper[7599]: I0318 13:14:37.253915 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:38.254126 master-0 kubenswrapper[7599]: I0318 13:14:38.254001 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:38.254126 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:38.254126 master-0 kubenswrapper[7599]: [+]process-running ok 
Mar 18 13:14:38.254126 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:38.254126 master-0 kubenswrapper[7599]: I0318 13:14:38.254099 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:39.254663 master-0 kubenswrapper[7599]: I0318 13:14:39.254586 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:39.254663 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:39.254663 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:39.254663 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:39.255365 master-0 kubenswrapper[7599]: I0318 13:14:39.254680 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:40.253697 master-0 kubenswrapper[7599]: I0318 13:14:40.253612 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:40.253697 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:40.253697 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:40.253697 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:40.254027 master-0 kubenswrapper[7599]: I0318 13:14:40.253708 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:41.254369 master-0 kubenswrapper[7599]: I0318 13:14:41.254306 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:41.254369 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:41.254369 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:41.254369 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:41.255057 master-0 kubenswrapper[7599]: I0318 13:14:41.254395 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:42.254179 master-0 kubenswrapper[7599]: I0318 13:14:42.254060 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:42.254179 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:42.254179 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:42.254179 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:42.255297 master-0 kubenswrapper[7599]: I0318 13:14:42.254199 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:43.252736 
master-0 kubenswrapper[7599]: I0318 13:14:43.252680 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:43.252736 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:43.252736 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:43.252736 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:43.252736 master-0 kubenswrapper[7599]: I0318 13:14:43.252732 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:44.254262 master-0 kubenswrapper[7599]: I0318 13:14:44.254198 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:44.254262 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:44.254262 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:44.254262 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:44.255027 master-0 kubenswrapper[7599]: I0318 13:14:44.254293 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:45.254354 master-0 kubenswrapper[7599]: I0318 13:14:45.254278 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:45.254354 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:45.254354 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:45.254354 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:45.254933 master-0 kubenswrapper[7599]: I0318 13:14:45.254383 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:46.253552 master-0 kubenswrapper[7599]: I0318 13:14:46.253495 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:46.253552 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:46.253552 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:46.253552 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:46.253861 master-0 kubenswrapper[7599]: I0318 13:14:46.253556 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:47.254616 master-0 kubenswrapper[7599]: I0318 13:14:47.254557 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:47.254616 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:47.254616 master-0 
kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:47.254616 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:47.254616 master-0 kubenswrapper[7599]: I0318 13:14:47.254622 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:48.254835 master-0 kubenswrapper[7599]: I0318 13:14:48.254766 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:48.254835 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:48.254835 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:48.254835 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:48.255851 master-0 kubenswrapper[7599]: I0318 13:14:48.254878 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:49.253542 master-0 kubenswrapper[7599]: I0318 13:14:49.253455 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:49.253542 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:49.253542 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:49.253542 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:49.254065 master-0 kubenswrapper[7599]: I0318 13:14:49.253588 7599 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:50.254289 master-0 kubenswrapper[7599]: I0318 13:14:50.254212 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:50.254289 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:50.254289 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:50.254289 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:50.254921 master-0 kubenswrapper[7599]: I0318 13:14:50.254294 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:51.254717 master-0 kubenswrapper[7599]: I0318 13:14:51.254642 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:51.254717 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:51.254717 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:51.254717 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:51.255531 master-0 kubenswrapper[7599]: I0318 13:14:51.254744 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 13:14:52.254348 master-0 kubenswrapper[7599]: I0318 13:14:52.254271 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:52.254348 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:52.254348 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:52.254348 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:52.255331 master-0 kubenswrapper[7599]: I0318 13:14:52.254383 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:53.254119 master-0 kubenswrapper[7599]: I0318 13:14:53.254034 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:53.254119 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:53.254119 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:53.254119 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:53.254528 master-0 kubenswrapper[7599]: I0318 13:14:53.254112 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:54.254086 master-0 kubenswrapper[7599]: I0318 13:14:54.254020 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:54.254086 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:54.254086 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:54.254086 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:54.254862 master-0 kubenswrapper[7599]: I0318 13:14:54.254090 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:55.253172 master-0 kubenswrapper[7599]: I0318 13:14:55.253095 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:55.253172 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:55.253172 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:55.253172 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:55.253585 master-0 kubenswrapper[7599]: I0318 13:14:55.253187 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:56.255635 master-0 kubenswrapper[7599]: I0318 13:14:56.255586 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:56.255635 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 
13:14:56.255635 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:56.255635 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:56.256464 master-0 kubenswrapper[7599]: I0318 13:14:56.256432 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:57.254228 master-0 kubenswrapper[7599]: I0318 13:14:57.254099 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:57.254228 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:57.254228 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:57.254228 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:57.254834 master-0 kubenswrapper[7599]: I0318 13:14:57.254229 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:58.254788 master-0 kubenswrapper[7599]: I0318 13:14:58.254684 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:58.254788 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:58.254788 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:58.254788 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:58.255845 master-0 kubenswrapper[7599]: I0318 13:14:58.254804 
7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:14:59.254487 master-0 kubenswrapper[7599]: I0318 13:14:59.254192 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:14:59.254487 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:14:59.254487 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:14:59.254487 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:14:59.254487 master-0 kubenswrapper[7599]: I0318 13:14:59.254358 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:00.254375 master-0 kubenswrapper[7599]: I0318 13:15:00.254293 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:00.254375 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:00.254375 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:00.254375 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:00.255043 master-0 kubenswrapper[7599]: I0318 13:15:00.254373 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 13:15:01.254179 master-0 kubenswrapper[7599]: I0318 13:15:01.254075 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:01.254179 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:01.254179 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:01.254179 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:01.254179 master-0 kubenswrapper[7599]: I0318 13:15:01.254160 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:02.254395 master-0 kubenswrapper[7599]: I0318 13:15:02.254290 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:02.254395 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:02.254395 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:02.254395 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:02.254859 master-0 kubenswrapper[7599]: I0318 13:15:02.254385 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:03.253727 master-0 kubenswrapper[7599]: I0318 13:15:03.253617 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:03.253727 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:03.253727 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:03.253727 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:03.253727 master-0 kubenswrapper[7599]: I0318 13:15:03.253716 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:04.253234 master-0 kubenswrapper[7599]: I0318 13:15:04.253170 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:04.253234 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:04.253234 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:04.253234 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:04.253785 master-0 kubenswrapper[7599]: I0318 13:15:04.253263 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:05.254351 master-0 kubenswrapper[7599]: I0318 13:15:05.254281 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:05.254351 master-0 kubenswrapper[7599]: 
[-]has-synced failed: reason withheld Mar 18 13:15:05.254351 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:05.254351 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:05.255109 master-0 kubenswrapper[7599]: I0318 13:15:05.254372 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:06.255784 master-0 kubenswrapper[7599]: I0318 13:15:06.255737 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:06.255784 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:06.255784 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:06.255784 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:06.256379 master-0 kubenswrapper[7599]: I0318 13:15:06.255806 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:07.254764 master-0 kubenswrapper[7599]: I0318 13:15:07.254692 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:07.254764 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:07.254764 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:07.254764 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:07.255623 master-0 
kubenswrapper[7599]: I0318 13:15:07.255573 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:08.255096 master-0 kubenswrapper[7599]: I0318 13:15:08.254989 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:08.255096 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:08.255096 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:08.255096 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:08.256243 master-0 kubenswrapper[7599]: I0318 13:15:08.255149 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:09.254538 master-0 kubenswrapper[7599]: I0318 13:15:09.254438 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:09.254538 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:09.254538 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:09.254538 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:09.254934 master-0 kubenswrapper[7599]: I0318 13:15:09.254591 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:10.255302 master-0 kubenswrapper[7599]: I0318 13:15:10.255199 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:10.255302 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:10.255302 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:10.255302 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:10.256868 master-0 kubenswrapper[7599]: I0318 13:15:10.255324 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:11.253738 master-0 kubenswrapper[7599]: I0318 13:15:11.253669 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:11.253738 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:11.253738 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:11.253738 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:11.254234 master-0 kubenswrapper[7599]: I0318 13:15:11.253751 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:12.257693 master-0 kubenswrapper[7599]: I0318 13:15:12.257637 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:12.257693 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:12.257693 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:12.257693 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:12.258344 master-0 kubenswrapper[7599]: I0318 13:15:12.257713 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:13.259348 master-0 kubenswrapper[7599]: I0318 13:15:13.259267 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:13.259348 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:13.259348 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:13.259348 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:13.259348 master-0 kubenswrapper[7599]: I0318 13:15:13.259355 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:14.255367 master-0 kubenswrapper[7599]: I0318 13:15:14.255297 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:15:14.255367 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:14.255367 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:14.255367 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:14.255747 master-0 kubenswrapper[7599]: I0318 13:15:14.255402 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:15.254144 master-0 kubenswrapper[7599]: I0318 13:15:15.254088 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:15.254144 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:15.254144 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:15.254144 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:15.254717 master-0 kubenswrapper[7599]: I0318 13:15:15.254159 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:16.254037 master-0 kubenswrapper[7599]: I0318 13:15:16.253934 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:16.254037 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:16.254037 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:16.254037 master-0 kubenswrapper[7599]: healthz 
check failed Mar 18 13:15:16.254037 master-0 kubenswrapper[7599]: I0318 13:15:16.254024 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:17.253310 master-0 kubenswrapper[7599]: I0318 13:15:17.253244 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:17.253310 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:17.253310 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:17.253310 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:17.253645 master-0 kubenswrapper[7599]: I0318 13:15:17.253333 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:18.254056 master-0 kubenswrapper[7599]: I0318 13:15:18.253974 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:18.254056 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:18.254056 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:18.254056 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:18.254056 master-0 kubenswrapper[7599]: I0318 13:15:18.254043 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:19.253794 master-0 kubenswrapper[7599]: I0318 13:15:19.253748 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:19.253794 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:19.253794 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:19.253794 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:19.254457 master-0 kubenswrapper[7599]: I0318 13:15:19.253810 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:20.253959 master-0 kubenswrapper[7599]: I0318 13:15:20.253882 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:20.253959 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:20.253959 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:20.253959 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:20.253959 master-0 kubenswrapper[7599]: I0318 13:15:20.253934 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:21.253839 master-0 kubenswrapper[7599]: I0318 13:15:21.253762 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:21.253839 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:21.253839 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:21.253839 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:21.253839 master-0 kubenswrapper[7599]: I0318 13:15:21.253823 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:22.255134 master-0 kubenswrapper[7599]: I0318 13:15:22.255027 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:22.255134 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:22.255134 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:22.255134 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:22.255134 master-0 kubenswrapper[7599]: I0318 13:15:22.255128 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:23.254882 master-0 kubenswrapper[7599]: I0318 13:15:23.254811 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:23.254882 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:23.254882 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:23.254882 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:23.254882 master-0 kubenswrapper[7599]: I0318 13:15:23.254880 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:24.255360 master-0 kubenswrapper[7599]: I0318 13:15:24.255260 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:24.255360 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:24.255360 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:24.255360 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:24.255360 master-0 kubenswrapper[7599]: I0318 13:15:24.255357 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:25.253677 master-0 kubenswrapper[7599]: I0318 13:15:25.253606 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:25.253677 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:25.253677 master-0 kubenswrapper[7599]: [+]process-running ok 
Mar 18 13:15:25.253677 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:25.253677 master-0 kubenswrapper[7599]: I0318 13:15:25.253676 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:26.254644 master-0 kubenswrapper[7599]: I0318 13:15:26.254506 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:26.254644 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:26.254644 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:26.254644 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:26.254644 master-0 kubenswrapper[7599]: I0318 13:15:26.254617 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:27.253826 master-0 kubenswrapper[7599]: I0318 13:15:27.253753 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:27.253826 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:27.253826 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:27.253826 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:27.254101 master-0 kubenswrapper[7599]: I0318 13:15:27.253878 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:28.254731 master-0 kubenswrapper[7599]: I0318 13:15:28.254647 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:28.254731 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:28.254731 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:28.254731 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:28.255882 master-0 kubenswrapper[7599]: I0318 13:15:28.254742 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:29.253908 master-0 kubenswrapper[7599]: I0318 13:15:29.253840 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:29.253908 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:29.253908 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:29.253908 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:29.254537 master-0 kubenswrapper[7599]: I0318 13:15:29.254482 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:30.254970 
master-0 kubenswrapper[7599]: I0318 13:15:30.254483 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:30.254970 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:30.254970 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:30.254970 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:30.254970 master-0 kubenswrapper[7599]: I0318 13:15:30.254582 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:31.121596 master-0 kubenswrapper[7599]: I0318 13:15:31.121531 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/2.log" Mar 18 13:15:31.122405 master-0 kubenswrapper[7599]: I0318 13:15:31.122371 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/1.log" Mar 18 13:15:31.122827 master-0 kubenswrapper[7599]: I0318 13:15:31.122791 7599 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f" exitCode=1 Mar 18 13:15:31.122890 master-0 kubenswrapper[7599]: I0318 13:15:31.122829 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f"} 
Mar 18 13:15:31.122926 master-0 kubenswrapper[7599]: I0318 13:15:31.122891 7599 scope.go:117] "RemoveContainer" containerID="df5d711a967c436c3ef89b97c0b604c819b293d8a09e8223cc8050c145294e10" Mar 18 13:15:31.124524 master-0 kubenswrapper[7599]: I0318 13:15:31.123518 7599 scope.go:117] "RemoveContainer" containerID="6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f" Mar 18 13:15:31.124524 master-0 kubenswrapper[7599]: E0318 13:15:31.123839 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:15:31.254178 master-0 kubenswrapper[7599]: I0318 13:15:31.254105 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:31.254178 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:31.254178 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:31.254178 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:31.254178 master-0 kubenswrapper[7599]: I0318 13:15:31.254162 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:32.133053 master-0 kubenswrapper[7599]: I0318 13:15:32.132980 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/2.log" Mar 18 13:15:32.255531 master-0 kubenswrapper[7599]: I0318 13:15:32.255427 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:32.255531 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:32.255531 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:32.255531 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:32.255531 master-0 kubenswrapper[7599]: I0318 13:15:32.255525 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:33.255020 master-0 kubenswrapper[7599]: I0318 13:15:33.254919 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:33.255020 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:33.255020 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:33.255020 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:33.255672 master-0 kubenswrapper[7599]: I0318 13:15:33.255029 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:34.255317 master-0 kubenswrapper[7599]: I0318 13:15:34.255244 
7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:34.255317 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:34.255317 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:34.255317 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:34.255317 master-0 kubenswrapper[7599]: I0318 13:15:34.255319 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:35.254027 master-0 kubenswrapper[7599]: I0318 13:15:35.253949 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:15:35.254027 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:15:35.254027 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:15:35.254027 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:15:35.254350 master-0 kubenswrapper[7599]: I0318 13:15:35.254049 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:15:36.254556 master-0 kubenswrapper[7599]: I0318 13:15:36.254468 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:36.254556 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:36.254556 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:36.254556 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:36.255955 master-0 kubenswrapper[7599]: I0318 13:15:36.254555 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:37.254400 master-0 kubenswrapper[7599]: I0318 13:15:37.254332 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:37.254400 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:37.254400 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:37.254400 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:37.255091 master-0 kubenswrapper[7599]: I0318 13:15:37.254449 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:38.295553 master-0 kubenswrapper[7599]: I0318 13:15:38.295455 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:38.295553 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:38.295553 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:38.295553 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:38.296592 master-0 kubenswrapper[7599]: I0318 13:15:38.295583 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:39.254711 master-0 kubenswrapper[7599]: I0318 13:15:39.254618 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:39.254711 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:39.254711 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:39.254711 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:39.255242 master-0 kubenswrapper[7599]: I0318 13:15:39.254713 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:40.254531 master-0 kubenswrapper[7599]: I0318 13:15:40.254399 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:40.254531 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:40.254531 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:40.254531 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:40.255807 master-0 kubenswrapper[7599]: I0318 13:15:40.254536 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:41.254683 master-0 kubenswrapper[7599]: I0318 13:15:41.254591 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:41.254683 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:41.254683 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:41.254683 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:41.255295 master-0 kubenswrapper[7599]: I0318 13:15:41.254692 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:42.255378 master-0 kubenswrapper[7599]: I0318 13:15:42.255265 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:42.255378 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:42.255378 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:42.255378 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:42.256909 master-0 kubenswrapper[7599]: I0318 13:15:42.255716 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:43.254818 master-0 kubenswrapper[7599]: I0318 13:15:43.254756 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:43.254818 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:43.254818 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:43.254818 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:43.255120 master-0 kubenswrapper[7599]: I0318 13:15:43.254852 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:44.259966 master-0 kubenswrapper[7599]: I0318 13:15:44.259899 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:44.259966 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:44.259966 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:44.259966 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:44.260683 master-0 kubenswrapper[7599]: I0318 13:15:44.259985 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:44.371173 master-0 kubenswrapper[7599]: I0318 13:15:44.371097 7599 scope.go:117] "RemoveContainer" containerID="6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f"
Mar 18 13:15:44.371490 master-0 kubenswrapper[7599]: E0318 13:15:44.371434 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e"
Mar 18 13:15:45.253169 master-0 kubenswrapper[7599]: I0318 13:15:45.253099 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:45.253169 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:45.253169 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:45.253169 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:45.253524 master-0 kubenswrapper[7599]: I0318 13:15:45.253174 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:46.254434 master-0 kubenswrapper[7599]: I0318 13:15:46.254324 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:46.254434 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:46.254434 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:46.254434 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:46.255105 master-0 kubenswrapper[7599]: I0318 13:15:46.254455 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:47.253610 master-0 kubenswrapper[7599]: I0318 13:15:47.253539 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:47.253610 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:47.253610 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:47.253610 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:47.253610 master-0 kubenswrapper[7599]: I0318 13:15:47.253607 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:48.253594 master-0 kubenswrapper[7599]: I0318 13:15:48.253480 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:48.253594 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:48.253594 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:48.253594 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:48.253594 master-0 kubenswrapper[7599]: I0318 13:15:48.253557 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:49.254708 master-0 kubenswrapper[7599]: I0318 13:15:49.254624 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:49.254708 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:49.254708 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:49.254708 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:49.254708 master-0 kubenswrapper[7599]: I0318 13:15:49.254686 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:50.262542 master-0 kubenswrapper[7599]: I0318 13:15:50.254709 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:50.262542 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:50.262542 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:50.262542 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:50.262542 master-0 kubenswrapper[7599]: I0318 13:15:50.254807 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:51.254965 master-0 kubenswrapper[7599]: I0318 13:15:51.254855 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:51.254965 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:51.254965 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:51.254965 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:51.254965 master-0 kubenswrapper[7599]: I0318 13:15:51.254955 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:52.253893 master-0 kubenswrapper[7599]: I0318 13:15:52.253829 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:52.253893 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:52.253893 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:52.253893 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:52.254472 master-0 kubenswrapper[7599]: I0318 13:15:52.253899 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:53.253134 master-0 kubenswrapper[7599]: I0318 13:15:53.253054 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:53.253134 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:53.253134 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:53.253134 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:53.253560 master-0 kubenswrapper[7599]: I0318 13:15:53.253154 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:54.254198 master-0 kubenswrapper[7599]: I0318 13:15:54.254100 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:54.254198 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:54.254198 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:54.254198 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:54.254198 master-0 kubenswrapper[7599]: I0318 13:15:54.254175 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:55.253445 master-0 kubenswrapper[7599]: I0318 13:15:55.253370 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:55.253445 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:55.253445 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:55.253445 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:55.253788 master-0 kubenswrapper[7599]: I0318 13:15:55.253460 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:55.375815 master-0 kubenswrapper[7599]: I0318 13:15:55.375740 7599 scope.go:117] "RemoveContainer" containerID="6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f"
Mar 18 13:15:56.254745 master-0 kubenswrapper[7599]: I0318 13:15:56.254697 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:15:56.254745 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:15:56.254745 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:15:56.254745 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:15:56.255016 master-0 kubenswrapper[7599]: I0318 13:15:56.254758 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:15:56.255016 master-0 kubenswrapper[7599]: I0318 13:15:56.254799 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:15:56.255346 master-0 kubenswrapper[7599]: I0318 13:15:56.255307 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2"} pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 13:15:56.255395 master-0 kubenswrapper[7599]: I0318 13:15:56.255352 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" containerID="cri-o://07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2" gracePeriod=3600
Mar 18 13:15:56.294225 master-0 kubenswrapper[7599]: I0318 13:15:56.294180 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/2.log"
Mar 18 13:15:56.294621 master-0 kubenswrapper[7599]: I0318 13:15:56.294563 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd"}
Mar 18 13:15:56.313326 master-0 kubenswrapper[7599]: I0318 13:15:56.313277 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"]
Mar 18 13:15:56.313555 master-0 kubenswrapper[7599]: I0318 13:15:56.313501 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager" containerID="cri-o://faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106" gracePeriod=30
Mar 18 13:15:56.693753 master-0 kubenswrapper[7599]: I0318 13:15:56.693708 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.770627 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5nvb\" (UniqueName: \"kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb\") pod \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") "
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.770718 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert\") pod \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") "
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.770784 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca\") pod \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") "
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.770826 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles\") pod \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") "
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.770841 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config\") pod \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\" (UID: \"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0\") "
Mar 18 13:15:56.771839 master-0 kubenswrapper[7599]: I0318 13:15:56.771565 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config" (OuterVolumeSpecName: "config") pod "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" (UID: "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:15:56.772273 master-0 kubenswrapper[7599]: I0318 13:15:56.771961 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" (UID: "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:15:56.772455 master-0 kubenswrapper[7599]: I0318 13:15:56.772387 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" (UID: "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:15:56.773985 master-0 kubenswrapper[7599]: I0318 13:15:56.773922 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" (UID: "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:15:56.775917 master-0 kubenswrapper[7599]: I0318 13:15:56.775825 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb" (OuterVolumeSpecName: "kube-api-access-m5nvb") pod "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" (UID: "6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0"). InnerVolumeSpecName "kube-api-access-m5nvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:15:56.871599 master-0 kubenswrapper[7599]: I0318 13:15:56.871516 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:15:56.871599 master-0 kubenswrapper[7599]: I0318 13:15:56.871587 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:15:56.871599 master-0 kubenswrapper[7599]: I0318 13:15:56.871598 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:15:56.871599 master-0 kubenswrapper[7599]: I0318 13:15:56.871607 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 13:15:56.871599 master-0 kubenswrapper[7599]: I0318 13:15:56.871618 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5nvb\" (UniqueName: \"kubernetes.io/projected/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0-kube-api-access-m5nvb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:15:57.111780 master-0 kubenswrapper[7599]: I0318 13:15:57.111662 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 13:15:57.111963 master-0 kubenswrapper[7599]: E0318 13:15:57.111940 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.112003 master-0 kubenswrapper[7599]: I0318 13:15:57.111963 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.112125 master-0 kubenswrapper[7599]: I0318 13:15:57.112104 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.112125 master-0 kubenswrapper[7599]: I0318 13:15:57.112121 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.112568 master-0 kubenswrapper[7599]: I0318 13:15:57.112543 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.116024 master-0 kubenswrapper[7599]: I0318 13:15:57.115931 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-qcrdm"
Mar 18 13:15:57.116733 master-0 kubenswrapper[7599]: I0318 13:15:57.116688 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 13:15:57.133298 master-0 kubenswrapper[7599]: I0318 13:15:57.133236 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 13:15:57.176293 master-0 kubenswrapper[7599]: I0318 13:15:57.176222 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.176293 master-0 kubenswrapper[7599]: I0318 13:15:57.176265 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.176293 master-0 kubenswrapper[7599]: I0318 13:15:57.176293 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.277137 master-0 kubenswrapper[7599]: I0318 13:15:57.277034 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.277137 master-0 kubenswrapper[7599]: I0318 13:15:57.277104 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.277137 master-0 kubenswrapper[7599]: I0318 13:15:57.277152 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.277609 master-0 kubenswrapper[7599]: I0318 13:15:57.277281 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.277609 master-0 kubenswrapper[7599]: I0318 13:15:57.277341 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.294690 master-0 kubenswrapper[7599]: I0318 13:15:57.294609 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.303077 master-0 kubenswrapper[7599]: I0318 13:15:57.303007 7599 generic.go:334] "Generic (PLEG): container finished" podID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerID="faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106" exitCode=0
Mar 18 13:15:57.303077 master-0 kubenswrapper[7599]: I0318 13:15:57.303075 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerDied","Data":"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"}
Mar 18 13:15:57.303505 master-0 kubenswrapper[7599]: I0318 13:15:57.303120 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr" event={"ID":"6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0","Type":"ContainerDied","Data":"2db7c8466cbb5260cee9337225110234f66cfda1042ae3eab42421d66a814e6e"}
Mar 18 13:15:57.303505 master-0 kubenswrapper[7599]: I0318 13:15:57.303147 7599 scope.go:117] "RemoveContainer" containerID="faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"
Mar 18 13:15:57.303964 master-0 kubenswrapper[7599]: I0318 13:15:57.303907 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fffb75699-b7pwr"
Mar 18 13:15:57.324904 master-0 kubenswrapper[7599]: I0318 13:15:57.324870 7599 scope.go:117] "RemoveContainer" containerID="bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"
Mar 18 13:15:57.350528 master-0 kubenswrapper[7599]: I0318 13:15:57.350499 7599 scope.go:117] "RemoveContainer" containerID="faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"
Mar 18 13:15:57.351966 master-0 kubenswrapper[7599]: E0318 13:15:57.351904 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106\": container with ID starting with faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106 not found: ID does not exist" containerID="faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"
Mar 18 13:15:57.352032 master-0 kubenswrapper[7599]: I0318 13:15:57.351976 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106"} err="failed to get container status \"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106\": rpc error: code = NotFound desc = could not find container \"faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106\": container with ID starting with faa3ae234ab802023f31cbc58b44ed217cc96ae50b5b94a276ed5c1ebb501106 not found: ID does not exist"
Mar 18 13:15:57.352032 master-0 kubenswrapper[7599]: I0318 13:15:57.352013 7599 scope.go:117] "RemoveContainer" containerID="bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"
Mar 18 13:15:57.352770 master-0 kubenswrapper[7599]: E0318 13:15:57.352717 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b\": container with ID starting with bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b not found: ID does not exist" containerID="bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"
Mar 18 13:15:57.352885 master-0 kubenswrapper[7599]: I0318 13:15:57.352856 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b"} err="failed to get container status \"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b\": rpc error: code = NotFound desc = could not find container \"bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b\": container with ID starting with bb553102326f2c4e518a7d4b30f5d51c49f697e05a25cafd876fd2a4683c379b not found: ID does not exist"
Mar 18 13:15:57.359689 master-0 kubenswrapper[7599]: I0318 13:15:57.359637 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"]
Mar 18 13:15:57.365238 master-0 kubenswrapper[7599]: I0318 13:15:57.365135 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fffb75699-b7pwr"]
Mar 18 13:15:57.383166 master-0 kubenswrapper[7599]: I0318 13:15:57.383120 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" path="/var/lib/kubelet/pods/6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0/volumes"
Mar 18 13:15:57.453565 master-0 kubenswrapper[7599]: I0318 13:15:57.453487 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 18 13:15:57.924854 master-0 kubenswrapper[7599]: I0318 13:15:57.924789 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 18 13:15:57.954429 master-0 kubenswrapper[7599]: I0318 13:15:57.954177 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"]
Mar 18 13:15:57.954650 master-0 kubenswrapper[7599]: E0318 13:15:57.954557 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.954650 master-0 kubenswrapper[7599]: I0318 13:15:57.954578 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3f37a0-7a87-486e-90b9-d5e15d6ab4c0" containerName="controller-manager"
Mar 18 13:15:57.955349 master-0 kubenswrapper[7599]: I0318 13:15:57.955313 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm"
Mar 18 13:15:57.957520 master-0 kubenswrapper[7599]: I0318 13:15:57.957487 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l"
Mar 18 13:15:57.958221 master-0 kubenswrapper[7599]: I0318 13:15:57.958049 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:15:57.958221 master-0 kubenswrapper[7599]: I0318 13:15:57.958073 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:15:57.958353 master-0 kubenswrapper[7599]: I0318 13:15:57.958233 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:15:57.960025 master-0 kubenswrapper[7599]: I0318 13:15:57.958499 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 13:15:57.960598 master-0 kubenswrapper[7599]: I0318 13:15:57.960493 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:15:57.965529 master-0 kubenswrapper[7599]: I0318 13:15:57.965010 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 13:15:57.967825 master-0 kubenswrapper[7599]: I0318 13:15:57.967207 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"]
Mar 18 13:15:57.989599 master-0 kubenswrapper[7599]: I0318 13:15:57.987431 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod
\"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:57.989599 master-0 kubenswrapper[7599]: I0318 13:15:57.987534 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:57.989599 master-0 kubenswrapper[7599]: I0318 13:15:57.987571 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:57.989599 master-0 kubenswrapper[7599]: I0318 13:15:57.987611 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:57.989599 master-0 kubenswrapper[7599]: I0318 13:15:57.987632 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.089115 master-0 kubenswrapper[7599]: I0318 
13:15:58.089049 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.089115 master-0 kubenswrapper[7599]: I0318 13:15:58.089116 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.089561 master-0 kubenswrapper[7599]: I0318 13:15:58.089193 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.089830 master-0 kubenswrapper[7599]: I0318 13:15:58.089679 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.089830 master-0 kubenswrapper[7599]: I0318 13:15:58.089733 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: 
\"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.090614 master-0 kubenswrapper[7599]: I0318 13:15:58.090572 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.091137 master-0 kubenswrapper[7599]: I0318 13:15:58.091094 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.091998 master-0 kubenswrapper[7599]: I0318 13:15:58.091926 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.093342 master-0 kubenswrapper[7599]: I0318 13:15:58.093297 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.105728 master-0 kubenswrapper[7599]: I0318 13:15:58.105685 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qn7f\" (UniqueName: 
\"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.310966 master-0 kubenswrapper[7599]: I0318 13:15:58.310884 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerStarted","Data":"4daffe612ceab094bb2d1f38476f0856eefbbaa467bc42a2d0b021a9807cf03f"} Mar 18 13:15:58.310966 master-0 kubenswrapper[7599]: I0318 13:15:58.310934 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerStarted","Data":"52a3b14cf6bdc42bb301c45eb61a63c8e96420bc048eb6405582d863a95b40ad"} Mar 18 13:15:58.322996 master-0 kubenswrapper[7599]: I0318 13:15:58.322933 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:58.330933 master-0 kubenswrapper[7599]: I0318 13:15:58.330857 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.330836178 podStartE2EDuration="1.330836178s" podCreationTimestamp="2026-03-18 13:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:15:58.327757554 +0000 UTC m=+533.288811796" watchObservedRunningTime="2026-03-18 13:15:58.330836178 +0000 UTC m=+533.291890420" Mar 18 13:15:58.703078 master-0 kubenswrapper[7599]: I0318 13:15:58.702939 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"] Mar 18 13:15:59.320925 master-0 kubenswrapper[7599]: I0318 13:15:59.320849 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerStarted","Data":"8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c"} Mar 18 13:15:59.320925 master-0 kubenswrapper[7599]: I0318 13:15:59.320904 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerStarted","Data":"3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b"} Mar 18 13:15:59.321516 master-0 kubenswrapper[7599]: I0318 13:15:59.321068 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:59.326250 master-0 kubenswrapper[7599]: I0318 13:15:59.326207 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:15:59.371090 master-0 kubenswrapper[7599]: I0318 13:15:59.370981 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" podStartSLOduration=3.370953335 podStartE2EDuration="3.370953335s" podCreationTimestamp="2026-03-18 13:15:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:15:59.346957487 +0000 UTC m=+534.308011759" watchObservedRunningTime="2026-03-18 13:15:59.370953335 +0000 UTC m=+534.332007597" Mar 18 13:16:02.623622 master-0 kubenswrapper[7599]: I0318 13:16:02.623547 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 13:16:02.624572 master-0 kubenswrapper[7599]: I0318 13:16:02.624539 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.628052 master-0 kubenswrapper[7599]: I0318 13:16:02.628022 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-8swsn" Mar 18 13:16:02.628881 master-0 kubenswrapper[7599]: I0318 13:16:02.628838 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 13:16:02.646078 master-0 kubenswrapper[7599]: I0318 13:16:02.646033 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 13:16:02.652940 master-0 kubenswrapper[7599]: I0318 13:16:02.652902 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 
13:16:02.653056 master-0 kubenswrapper[7599]: I0318 13:16:02.652956 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.653232 master-0 kubenswrapper[7599]: I0318 13:16:02.653181 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.754512 master-0 kubenswrapper[7599]: I0318 13:16:02.754373 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.754790 master-0 kubenswrapper[7599]: I0318 13:16:02.754532 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.754790 master-0 kubenswrapper[7599]: I0318 13:16:02.754560 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.754790 master-0 kubenswrapper[7599]: I0318 
13:16:02.754607 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.754790 master-0 kubenswrapper[7599]: I0318 13:16:02.754666 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.781294 master-0 kubenswrapper[7599]: I0318 13:16:02.781219 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:02.953378 master-0 kubenswrapper[7599]: I0318 13:16:02.953293 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:03.444473 master-0 kubenswrapper[7599]: I0318 13:16:03.444380 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 13:16:03.457742 master-0 kubenswrapper[7599]: W0318 13:16:03.457490 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5217b77d_b517_45c3_b76d_eee86d72b141.slice/crio-a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad WatchSource:0}: Error finding container a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad: Status 404 returned error can't find the container with id a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad Mar 18 13:16:04.360147 master-0 kubenswrapper[7599]: I0318 13:16:04.360029 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerStarted","Data":"44724c38cb2d6b59ba2396d53ded36b1d7f457c6dd6834e92f2a09e247880a38"} Mar 18 13:16:04.360147 master-0 kubenswrapper[7599]: I0318 13:16:04.360113 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerStarted","Data":"a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad"} Mar 18 13:16:04.390370 master-0 kubenswrapper[7599]: I0318 13:16:04.390220 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.390191109 podStartE2EDuration="2.390191109s" podCreationTimestamp="2026-03-18 13:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:16:04.38399867 +0000 UTC m=+539.345052942" watchObservedRunningTime="2026-03-18 13:16:04.390191109 +0000 UTC m=+539.351245391" Mar 18 
13:16:23.650806 master-0 kubenswrapper[7599]: I0318 13:16:23.650232 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:16:23.654127 master-0 kubenswrapper[7599]: I0318 13:16:23.651637 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.659391 master-0 kubenswrapper[7599]: I0318 13:16:23.655356 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-w7jpc" Mar 18 13:16:23.659391 master-0 kubenswrapper[7599]: I0318 13:16:23.655773 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:16:23.684283 master-0 kubenswrapper[7599]: I0318 13:16:23.684185 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:16:23.751702 master-0 kubenswrapper[7599]: I0318 13:16:23.751586 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.752032 master-0 kubenswrapper[7599]: I0318 13:16:23.751953 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.752283 master-0 kubenswrapper[7599]: I0318 13:16:23.752227 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.853747 master-0 kubenswrapper[7599]: I0318 13:16:23.853649 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.853747 master-0 kubenswrapper[7599]: I0318 13:16:23.853736 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.854043 master-0 kubenswrapper[7599]: I0318 13:16:23.853827 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.854043 master-0 kubenswrapper[7599]: I0318 13:16:23.853999 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.854139 master-0 kubenswrapper[7599]: I0318 13:16:23.854034 7599 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.876990 master-0 kubenswrapper[7599]: I0318 13:16:23.876895 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:23.988688 master-0 kubenswrapper[7599]: I0318 13:16:23.988611 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:16:24.486782 master-0 kubenswrapper[7599]: W0318 13:16:24.486715 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod62eae2a9_2667_431e_ad73_ca18124d01f6.slice/crio-d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae WatchSource:0}: Error finding container d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae: Status 404 returned error can't find the container with id d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae Mar 18 13:16:24.493183 master-0 kubenswrapper[7599]: I0318 13:16:24.493083 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 13:16:24.496170 master-0 kubenswrapper[7599]: I0318 13:16:24.496108 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerStarted","Data":"d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae"} Mar 18 13:16:25.505342 master-0 kubenswrapper[7599]: I0318 
13:16:25.505250 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerStarted","Data":"84d4addeaab69d00ff961004821b23d05bc68d242853d91f47889592129b1a88"} Mar 18 13:16:25.531081 master-0 kubenswrapper[7599]: I0318 13:16:25.530965 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.5309397860000002 podStartE2EDuration="2.530939786s" podCreationTimestamp="2026-03-18 13:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:16:25.525302851 +0000 UTC m=+560.486357093" watchObservedRunningTime="2026-03-18 13:16:25.530939786 +0000 UTC m=+560.491994068" Mar 18 13:16:29.343673 master-0 kubenswrapper[7599]: E0318 13:16:29.343531 7599 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-scheduler-pod.yaml\": /etc/kubernetes/manifests/kube-scheduler-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 18 13:16:29.344545 master-0 kubenswrapper[7599]: I0318 13:16:29.343843 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:16:29.344545 master-0 kubenswrapper[7599]: I0318 13:16:29.344125 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://2b457975b25ebc40cb55c5ee5a932669f56be2bc949f61f598bc7d15209f09c7" gracePeriod=30 Mar 18 13:16:29.345359 master-0 kubenswrapper[7599]: I0318 13:16:29.345293 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 
13:16:29.345695 master-0 kubenswrapper[7599]: E0318 13:16:29.345657 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.345695 master-0 kubenswrapper[7599]: I0318 13:16:29.345683 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.345809 master-0 kubenswrapper[7599]: E0318 13:16:29.345701 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.345809 master-0 kubenswrapper[7599]: I0318 13:16:29.345710 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.345913 master-0 kubenswrapper[7599]: I0318 13:16:29.345859 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.345913 master-0 kubenswrapper[7599]: I0318 13:16:29.345875 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 13:16:29.347270 master-0 kubenswrapper[7599]: I0318 13:16:29.347207 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.386724 master-0 kubenswrapper[7599]: I0318 13:16:29.385437 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 13:16:29.430597 master-0 kubenswrapper[7599]: I0318 13:16:29.430550 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.430875 master-0 kubenswrapper[7599]: I0318 13:16:29.430817 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.531821 master-0 kubenswrapper[7599]: I0318 13:16:29.531782 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.531962 master-0 kubenswrapper[7599]: I0318 13:16:29.531927 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.532069 master-0 kubenswrapper[7599]: 
I0318 13:16:29.532050 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.532186 master-0 kubenswrapper[7599]: I0318 13:16:29.532162 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.562663 master-0 kubenswrapper[7599]: I0318 13:16:29.562594 7599 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="2b457975b25ebc40cb55c5ee5a932669f56be2bc949f61f598bc7d15209f09c7" exitCode=0 Mar 18 13:16:29.562876 master-0 kubenswrapper[7599]: I0318 13:16:29.562706 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="492f8b5295cdebec40bc2ae4858cc5e6b7f333a6d877b4f8238f7b732ed663e7" Mar 18 13:16:29.562876 master-0 kubenswrapper[7599]: I0318 13:16:29.562746 7599 scope.go:117] "RemoveContainer" containerID="fa24e07dc1e554926055d55fec3f68de49cdd19d5efe278d06ec7ad571b7e767" Mar 18 13:16:29.569893 master-0 kubenswrapper[7599]: I0318 13:16:29.565682 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerDied","Data":"4daffe612ceab094bb2d1f38476f0856eefbbaa467bc42a2d0b021a9807cf03f"} Mar 18 13:16:29.569893 master-0 kubenswrapper[7599]: I0318 13:16:29.565307 7599 generic.go:334] "Generic (PLEG): container finished" podID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" 
containerID="4daffe612ceab094bb2d1f38476f0856eefbbaa467bc42a2d0b021a9807cf03f" exitCode=0 Mar 18 13:16:29.624404 master-0 kubenswrapper[7599]: I0318 13:16:29.624371 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:16:29.632782 master-0 kubenswrapper[7599]: I0318 13:16:29.632754 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 13:16:29.632860 master-0 kubenswrapper[7599]: I0318 13:16:29.632810 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 13:16:29.632940 master-0 kubenswrapper[7599]: I0318 13:16:29.632914 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:29.633061 master-0 kubenswrapper[7599]: I0318 13:16:29.633044 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:29.633092 master-0 kubenswrapper[7599]: I0318 13:16:29.633072 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:29.682918 master-0 kubenswrapper[7599]: I0318 13:16:29.682771 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:29.701370 master-0 kubenswrapper[7599]: W0318 13:16:29.701321 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8413125cf444e5c95f023c5dd9c6151e.slice/crio-8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae WatchSource:0}: Error finding container 8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae: Status 404 returned error can't find the container with id 8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae Mar 18 13:16:29.735453 master-0 kubenswrapper[7599]: I0318 13:16:29.735382 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:30.577335 master-0 kubenswrapper[7599]: I0318 13:16:30.577272 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 13:16:30.579049 master-0 kubenswrapper[7599]: I0318 13:16:30.579005 7599 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="66077c2a26014879f2ee8a44731dd4750343ebe7a4a34fc0f126a55d48c25d7c" exitCode=0 Mar 18 13:16:30.579136 master-0 kubenswrapper[7599]: I0318 13:16:30.579071 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"66077c2a26014879f2ee8a44731dd4750343ebe7a4a34fc0f126a55d48c25d7c"} Mar 18 13:16:30.579136 master-0 kubenswrapper[7599]: I0318 13:16:30.579105 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae"} Mar 18 13:16:30.927453 master-0 kubenswrapper[7599]: I0318 13:16:30.927398 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:16:30.955612 master-0 kubenswrapper[7599]: I0318 13:16:30.955570 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock\") pod \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " Mar 18 13:16:30.955838 master-0 kubenswrapper[7599]: I0318 13:16:30.955650 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock" (OuterVolumeSpecName: "var-lock") pod "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" (UID: "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:30.955838 master-0 kubenswrapper[7599]: I0318 13:16:30.955676 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir\") pod \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " Mar 18 13:16:30.955838 master-0 kubenswrapper[7599]: I0318 13:16:30.955703 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access\") pod \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\" (UID: \"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd\") " Mar 18 13:16:30.955838 master-0 kubenswrapper[7599]: I0318 13:16:30.955752 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" (UID: "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:30.956011 master-0 kubenswrapper[7599]: I0318 13:16:30.955873 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:30.956011 master-0 kubenswrapper[7599]: I0318 13:16:30.955884 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:30.958707 master-0 kubenswrapper[7599]: I0318 13:16:30.958669 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" (UID: "b4d424a6-cf4e-4e32-bc50-db63ef03f8dd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:16:31.066002 master-0 kubenswrapper[7599]: I0318 13:16:31.065953 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b4d424a6-cf4e-4e32-bc50-db63ef03f8dd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:31.382344 master-0 kubenswrapper[7599]: I0318 13:16:31.382195 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 18 13:16:31.382658 master-0 kubenswrapper[7599]: I0318 13:16:31.382628 7599 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 18 13:16:31.397245 master-0 kubenswrapper[7599]: I0318 13:16:31.397181 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:16:31.397245 master-0 kubenswrapper[7599]: I0318 13:16:31.397236 7599 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="8d122ae0-181c-4ce9-94a9-faf8aafde05d" Mar 18 13:16:31.402560 master-0 kubenswrapper[7599]: I0318 13:16:31.402505 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 13:16:31.402702 master-0 kubenswrapper[7599]: I0318 13:16:31.402553 7599 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="8d122ae0-181c-4ce9-94a9-faf8aafde05d" Mar 18 13:16:31.591499 master-0 kubenswrapper[7599]: I0318 13:16:31.591376 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerDied","Data":"52a3b14cf6bdc42bb301c45eb61a63c8e96420bc048eb6405582d863a95b40ad"} Mar 18 
13:16:31.592002 master-0 kubenswrapper[7599]: I0318 13:16:31.591553 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a3b14cf6bdc42bb301c45eb61a63c8e96420bc048eb6405582d863a95b40ad" Mar 18 13:16:31.592002 master-0 kubenswrapper[7599]: I0318 13:16:31.591699 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:16:31.595176 master-0 kubenswrapper[7599]: I0318 13:16:31.595121 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"b889e0b36e4f8979e44b821a5f017daae136b65f503dea68d88d71644816b7aa"} Mar 18 13:16:31.595244 master-0 kubenswrapper[7599]: I0318 13:16:31.595185 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"bb3c1483ffac3748926d161fadb4e79f4a598cb1de15bbbd5db0a2eb9306ca39"} Mar 18 13:16:31.595299 master-0 kubenswrapper[7599]: I0318 13:16:31.595267 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64"} Mar 18 13:16:31.595677 master-0 kubenswrapper[7599]: I0318 13:16:31.595600 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:16:31.618367 master-0 kubenswrapper[7599]: I0318 13:16:31.618286 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.6182730039999997 podStartE2EDuration="2.618273004s" podCreationTimestamp="2026-03-18 13:16:29 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:16:31.61593381 +0000 UTC m=+566.576988072" watchObservedRunningTime="2026-03-18 13:16:31.618273004 +0000 UTC m=+566.579327246" Mar 18 13:16:34.871223 master-0 kubenswrapper[7599]: I0318 13:16:34.871143 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:16:34.871944 master-0 kubenswrapper[7599]: I0318 13:16:34.871759 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" gracePeriod=30 Mar 18 13:16:34.871944 master-0 kubenswrapper[7599]: I0318 13:16:34.871786 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" gracePeriod=30 Mar 18 13:16:34.871944 master-0 kubenswrapper[7599]: I0318 13:16:34.871814 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" gracePeriod=30 Mar 18 13:16:34.871944 master-0 kubenswrapper[7599]: I0318 13:16:34.871824 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" gracePeriod=30 Mar 18 13:16:34.872816 master-0 kubenswrapper[7599]: I0318 13:16:34.872649 7599 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" gracePeriod=30 Mar 18 13:16:34.875047 master-0 kubenswrapper[7599]: I0318 13:16:34.875000 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:16:34.875386 master-0 kubenswrapper[7599]: E0318 13:16:34.875344 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:34.875386 master-0 kubenswrapper[7599]: I0318 13:16:34.875381 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:34.875516 master-0 kubenswrapper[7599]: E0318 13:16:34.875444 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 13:16:34.875516 master-0 kubenswrapper[7599]: I0318 13:16:34.875459 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 13:16:34.875516 master-0 kubenswrapper[7599]: E0318 13:16:34.875479 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 13:16:34.875516 master-0 kubenswrapper[7599]: I0318 13:16:34.875492 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 13:16:34.875516 master-0 kubenswrapper[7599]: E0318 13:16:34.875511 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerName="installer" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875524 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" 
containerName="installer" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: E0318 13:16:34.875544 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875558 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: E0318 13:16:34.875580 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875592 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: E0318 13:16:34.875612 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875623 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: E0318 13:16:34.875640 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875652 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: E0318 13:16:34.875681 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:34.875737 master-0 kubenswrapper[7599]: I0318 13:16:34.875693 7599 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.875924 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.875942 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.875956 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerName="installer" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.875986 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.876006 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 13:16:34.876217 master-0 kubenswrapper[7599]: I0318 13:16:34.876028 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 13:16:34.917121 master-0 kubenswrapper[7599]: I0318 13:16:34.917050 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:34.917272 master-0 kubenswrapper[7599]: I0318 13:16:34.917177 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: 
\"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:34.917328 master-0 kubenswrapper[7599]: I0318 13:16:34.917297 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:34.917436 master-0 kubenswrapper[7599]: I0318 13:16:34.917378 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:34.917519 master-0 kubenswrapper[7599]: I0318 13:16:34.917449 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:34.917578 master-0 kubenswrapper[7599]: I0318 13:16:34.917515 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.018953 master-0 kubenswrapper[7599]: I0318 13:16:35.018900 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019068 
master-0 kubenswrapper[7599]: I0318 13:16:35.019010 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019120 master-0 kubenswrapper[7599]: I0318 13:16:35.019091 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019204 master-0 kubenswrapper[7599]: I0318 13:16:35.019172 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019284 master-0 kubenswrapper[7599]: I0318 13:16:35.019255 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019338 master-0 kubenswrapper[7599]: I0318 13:16:35.019303 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019338 master-0 kubenswrapper[7599]: I0318 13:16:35.019312 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019446 master-0 kubenswrapper[7599]: I0318 13:16:35.019345 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019446 master-0 kubenswrapper[7599]: I0318 13:16:35.019362 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019446 master-0 kubenswrapper[7599]: I0318 13:16:35.019375 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019446 master-0 kubenswrapper[7599]: I0318 13:16:35.019402 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.019784 master-0 kubenswrapper[7599]: I0318 13:16:35.019742 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:16:35.626802 master-0 
kubenswrapper[7599]: I0318 13:16:35.626746 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:16:35.628394 master-0 kubenswrapper[7599]: I0318 13:16:35.628358 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:16:35.631588 master-0 kubenswrapper[7599]: I0318 13:16:35.631538 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" exitCode=2 Mar 18 13:16:35.631860 master-0 kubenswrapper[7599]: I0318 13:16:35.631820 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" exitCode=0 Mar 18 13:16:35.632066 master-0 kubenswrapper[7599]: I0318 13:16:35.632031 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" exitCode=2 Mar 18 13:16:42.348357 master-0 kubenswrapper[7599]: E0318 13:16:42.348308 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00375107_9a3b_4161_a90d_72ea8827c5fc.slice/crio-conmon-07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2.scope\": RecentStats: unable to find data in memory cache]" Mar 18 13:16:42.689368 master-0 kubenswrapper[7599]: I0318 13:16:42.689277 7599 generic.go:334] "Generic (PLEG): container finished" podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2" exitCode=0 Mar 18 13:16:42.689750 master-0 kubenswrapper[7599]: I0318 13:16:42.689357 7599 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2"} Mar 18 13:16:42.689750 master-0 kubenswrapper[7599]: I0318 13:16:42.689574 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429"} Mar 18 13:16:42.689750 master-0 kubenswrapper[7599]: I0318 13:16:42.689664 7599 scope.go:117] "RemoveContainer" containerID="f14e73371f76e20d73c8968b8d34cca55ee15e6f6c8c8c101d7840ace2efb3fd" Mar 18 13:16:43.251848 master-0 kubenswrapper[7599]: I0318 13:16:43.251744 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:16:43.254509 master-0 kubenswrapper[7599]: I0318 13:16:43.254444 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:43.254509 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:43.254509 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:43.254509 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:43.254958 master-0 kubenswrapper[7599]: I0318 13:16:43.254513 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:44.253855 master-0 kubenswrapper[7599]: I0318 13:16:44.253772 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:44.253855 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:44.253855 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:44.253855 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:44.253855 master-0 kubenswrapper[7599]: I0318 13:16:44.253858 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:45.251508 master-0 kubenswrapper[7599]: I0318 13:16:45.251389 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:16:45.254313 master-0 kubenswrapper[7599]: I0318 13:16:45.254253 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:45.254313 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:45.254313 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:45.254313 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:45.254907 master-0 kubenswrapper[7599]: I0318 13:16:45.254335 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:46.255644 master-0 kubenswrapper[7599]: I0318 13:16:46.255521 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:46.255644 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:46.255644 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:46.255644 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:46.256625 master-0 kubenswrapper[7599]: I0318 13:16:46.255653 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:47.254055 master-0 kubenswrapper[7599]: I0318 13:16:47.253986 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:47.254055 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:47.254055 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:47.254055 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:47.254346 master-0 kubenswrapper[7599]: I0318 13:16:47.254054 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:48.254684 master-0 kubenswrapper[7599]: I0318 13:16:48.254569 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:16:48.254684 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:48.254684 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:48.254684 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:48.255687 master-0 kubenswrapper[7599]: I0318 13:16:48.254737 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:48.783356 master-0 kubenswrapper[7599]: E0318 13:16:48.783281 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:16:38Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:16:38Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:16:38Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:16:38Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:16:49.255200 master-0 kubenswrapper[7599]: I0318 13:16:49.255108 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:49.255200 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:49.255200 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:49.255200 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:49.256270 master-0 kubenswrapper[7599]: I0318 13:16:49.255217 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:49.746463 master-0 kubenswrapper[7599]: I0318 13:16:49.746347 7599 generic.go:334] "Generic (PLEG): container finished" podID="5217b77d-b517-45c3-b76d-eee86d72b141" containerID="44724c38cb2d6b59ba2396d53ded36b1d7f457c6dd6834e92f2a09e247880a38" exitCode=0 Mar 18 13:16:49.746770 master-0 kubenswrapper[7599]: I0318 13:16:49.746473 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerDied","Data":"44724c38cb2d6b59ba2396d53ded36b1d7f457c6dd6834e92f2a09e247880a38"} Mar 18 13:16:49.750324 master-0 kubenswrapper[7599]: I0318 13:16:49.750255 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:16:49.750500 master-0 kubenswrapper[7599]: I0318 13:16:49.750380 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="44e87b551cd25fba74201071dfbbc65a904f19d68cc2d608c5f938a0ac57ad14" exitCode=1 Mar 18 13:16:49.750582 master-0 kubenswrapper[7599]: I0318 13:16:49.750483 7599 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerDied","Data":"44e87b551cd25fba74201071dfbbc65a904f19d68cc2d608c5f938a0ac57ad14"} Mar 18 13:16:49.751464 master-0 kubenswrapper[7599]: I0318 13:16:49.751394 7599 scope.go:117] "RemoveContainer" containerID="44e87b551cd25fba74201071dfbbc65a904f19d68cc2d608c5f938a0ac57ad14" Mar 18 13:16:50.254168 master-0 kubenswrapper[7599]: I0318 13:16:50.254061 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:50.254168 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:50.254168 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:50.254168 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:50.254168 master-0 kubenswrapper[7599]: I0318 13:16:50.254153 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:50.763771 master-0 kubenswrapper[7599]: I0318 13:16:50.763700 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:16:50.764793 master-0 kubenswrapper[7599]: I0318 13:16:50.763818 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"fb60ab1fea57ec49871d5edaaf3891b0d60ae36efb59421fd58289dbb8a18b9d"} Mar 18 13:16:51.064456 master-0 
kubenswrapper[7599]: I0318 13:16:51.064376 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:51.251857 master-0 kubenswrapper[7599]: I0318 13:16:51.251765 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock\") pod \"5217b77d-b517-45c3-b76d-eee86d72b141\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " Mar 18 13:16:51.252209 master-0 kubenswrapper[7599]: I0318 13:16:51.251978 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access\") pod \"5217b77d-b517-45c3-b76d-eee86d72b141\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " Mar 18 13:16:51.252209 master-0 kubenswrapper[7599]: I0318 13:16:51.252004 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock" (OuterVolumeSpecName: "var-lock") pod "5217b77d-b517-45c3-b76d-eee86d72b141" (UID: "5217b77d-b517-45c3-b76d-eee86d72b141"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:51.252209 master-0 kubenswrapper[7599]: I0318 13:16:51.252055 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir\") pod \"5217b77d-b517-45c3-b76d-eee86d72b141\" (UID: \"5217b77d-b517-45c3-b76d-eee86d72b141\") " Mar 18 13:16:51.252400 master-0 kubenswrapper[7599]: I0318 13:16:51.252359 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:51.252517 master-0 kubenswrapper[7599]: I0318 13:16:51.252444 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5217b77d-b517-45c3-b76d-eee86d72b141" (UID: "5217b77d-b517-45c3-b76d-eee86d72b141"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:16:51.252742 master-0 kubenswrapper[7599]: I0318 13:16:51.252688 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:51.252742 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:51.252742 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:51.252742 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:51.252960 master-0 kubenswrapper[7599]: I0318 13:16:51.252748 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:51.257659 master-0 kubenswrapper[7599]: I0318 13:16:51.257626 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5217b77d-b517-45c3-b76d-eee86d72b141" (UID: "5217b77d-b517-45c3-b76d-eee86d72b141"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:16:51.353642 master-0 kubenswrapper[7599]: I0318 13:16:51.353469 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5217b77d-b517-45c3-b76d-eee86d72b141-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:51.353642 master-0 kubenswrapper[7599]: I0318 13:16:51.353543 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5217b77d-b517-45c3-b76d-eee86d72b141-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:16:51.773758 master-0 kubenswrapper[7599]: I0318 13:16:51.773667 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerDied","Data":"a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad"} Mar 18 13:16:51.773758 master-0 kubenswrapper[7599]: I0318 13:16:51.773730 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad" Mar 18 13:16:51.774735 master-0 kubenswrapper[7599]: I0318 13:16:51.773774 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:16:52.254959 master-0 kubenswrapper[7599]: I0318 13:16:52.254833 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:52.254959 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:52.254959 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:52.254959 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:52.255407 master-0 kubenswrapper[7599]: I0318 13:16:52.254995 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:53.254711 master-0 kubenswrapper[7599]: I0318 13:16:53.254605 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:53.254711 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:53.254711 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:53.254711 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:53.255763 master-0 kubenswrapper[7599]: I0318 13:16:53.254720 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:53.374534 master-0 kubenswrapper[7599]: E0318 13:16:53.374405 7599 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:16:54.255394 master-0 kubenswrapper[7599]: I0318 13:16:54.255279 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:54.255394 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:54.255394 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:54.255394 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:54.257016 master-0 kubenswrapper[7599]: I0318 13:16:54.256968 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:55.255493 master-0 kubenswrapper[7599]: I0318 13:16:55.255367 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:55.255493 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:55.255493 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:55.255493 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:55.256501 master-0 kubenswrapper[7599]: I0318 13:16:55.255489 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
13:16:56.254679 master-0 kubenswrapper[7599]: I0318 13:16:56.254589 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:56.254679 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:56.254679 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:56.254679 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:56.255204 master-0 kubenswrapper[7599]: I0318 13:16:56.254687 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:57.016776 master-0 kubenswrapper[7599]: I0318 13:16:57.016627 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:57.016776 master-0 kubenswrapper[7599]: I0318 13:16:57.016747 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:57.025785 master-0 kubenswrapper[7599]: I0318 13:16:57.025720 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:16:57.254641 master-0 kubenswrapper[7599]: I0318 13:16:57.254519 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:57.254641 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:57.254641 master-0 
kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:57.254641 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:57.255073 master-0 kubenswrapper[7599]: I0318 13:16:57.254627 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:58.254613 master-0 kubenswrapper[7599]: I0318 13:16:58.254525 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:16:58.254613 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:58.254613 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:58.254613 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:58.255773 master-0 kubenswrapper[7599]: I0318 13:16:58.254622 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:16:58.783703 master-0 kubenswrapper[7599]: E0318 13:16:58.783624 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:16:59.254440 master-0 kubenswrapper[7599]: I0318 13:16:59.254342 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 13:16:59.254440 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:16:59.254440 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:16:59.254440 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:16:59.255221 master-0 kubenswrapper[7599]: I0318 13:16:59.254451 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:00.254401 master-0 kubenswrapper[7599]: I0318 13:17:00.254311 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:00.254401 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:00.254401 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:00.254401 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:00.255716 master-0 kubenswrapper[7599]: I0318 13:17:00.254402 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:01.254457 master-0 kubenswrapper[7599]: I0318 13:17:01.254364 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:01.254457 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:01.254457 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:01.254457 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:17:01.254457 master-0 kubenswrapper[7599]: I0318 13:17:01.254449 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:02.254224 master-0 kubenswrapper[7599]: I0318 13:17:02.254146 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:02.254224 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:02.254224 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:02.254224 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:02.255043 master-0 kubenswrapper[7599]: I0318 13:17:02.254246 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:03.254811 master-0 kubenswrapper[7599]: I0318 13:17:03.254764 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:03.254811 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:03.254811 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:03.254811 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:03.255595 master-0 kubenswrapper[7599]: I0318 13:17:03.255568 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:03.376013 master-0 kubenswrapper[7599]: E0318 13:17:03.375719 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:04.254090 master-0 kubenswrapper[7599]: I0318 13:17:04.254024 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:04.254090 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:04.254090 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:04.254090 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:04.254703 master-0 kubenswrapper[7599]: I0318 13:17:04.254652 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:05.253812 master-0 kubenswrapper[7599]: I0318 13:17:05.253736 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:05.253812 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:05.253812 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:05.253812 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:17:05.254665 master-0 kubenswrapper[7599]: I0318 13:17:05.253822 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:05.442426 master-0 kubenswrapper[7599]: I0318 13:17:05.442361 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:17:05.443533 master-0 kubenswrapper[7599]: I0318 13:17:05.443502 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:17:05.444161 master-0 kubenswrapper[7599]: I0318 13:17:05.444135 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:17:05.444578 master-0 kubenswrapper[7599]: I0318 13:17:05.444555 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:17:05.445791 master-0 kubenswrapper[7599]: I0318 13:17:05.445772 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:05.571076 master-0 kubenswrapper[7599]: I0318 13:17:05.570839 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571076 master-0 kubenswrapper[7599]: I0318 13:17:05.570969 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571076 master-0 kubenswrapper[7599]: I0318 13:17:05.570980 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.571076 master-0 kubenswrapper[7599]: I0318 13:17:05.571030 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571076 master-0 kubenswrapper[7599]: I0318 13:17:05.571050 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571093 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571127 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571146 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571184 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571196 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571218 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.571686 master-0 kubenswrapper[7599]: I0318 13:17:05.571225 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571711 7599 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571748 7599 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571767 7599 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571788 7599 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571805 7599 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.572237 master-0 kubenswrapper[7599]: I0318 13:17:05.571824 7599 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:05.902267 master-0 kubenswrapper[7599]: I0318 13:17:05.902097 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 13:17:05.903157 master-0 kubenswrapper[7599]: I0318 13:17:05.903129 7599 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 13:17:05.904669 master-0 kubenswrapper[7599]: I0318 13:17:05.904602 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 13:17:05.905343 master-0 kubenswrapper[7599]: I0318 13:17:05.905285 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 13:17:05.907125 master-0 kubenswrapper[7599]: I0318 13:17:05.907049 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" exitCode=137 Mar 18 13:17:05.907125 master-0 kubenswrapper[7599]: I0318 13:17:05.907105 7599 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" exitCode=137 Mar 18 13:17:05.907125 master-0 kubenswrapper[7599]: I0318 13:17:05.907129 7599 scope.go:117] "RemoveContainer" containerID="690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" Mar 18 13:17:05.907556 master-0 kubenswrapper[7599]: I0318 13:17:05.907247 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:05.928813 master-0 kubenswrapper[7599]: I0318 13:17:05.928745 7599 scope.go:117] "RemoveContainer" containerID="a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" Mar 18 13:17:05.962318 master-0 kubenswrapper[7599]: I0318 13:17:05.962199 7599 scope.go:117] "RemoveContainer" containerID="25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" Mar 18 13:17:05.983784 master-0 kubenswrapper[7599]: I0318 13:17:05.983597 7599 scope.go:117] "RemoveContainer" containerID="a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" Mar 18 13:17:05.999685 master-0 kubenswrapper[7599]: I0318 13:17:05.999583 7599 scope.go:117] "RemoveContainer" containerID="b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" Mar 18 13:17:06.017231 master-0 kubenswrapper[7599]: I0318 13:17:06.017178 7599 scope.go:117] "RemoveContainer" containerID="e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429" Mar 18 13:17:06.037269 master-0 kubenswrapper[7599]: I0318 13:17:06.037210 7599 scope.go:117] "RemoveContainer" containerID="0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252" Mar 18 13:17:06.066273 master-0 kubenswrapper[7599]: I0318 13:17:06.066209 7599 scope.go:117] "RemoveContainer" containerID="34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a" Mar 18 13:17:06.085142 master-0 kubenswrapper[7599]: I0318 13:17:06.085078 7599 scope.go:117] "RemoveContainer" containerID="690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" Mar 18 13:17:06.085812 master-0 kubenswrapper[7599]: E0318 13:17:06.085770 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1\": container with ID starting with 690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1 not found: ID does not exist" 
containerID="690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" Mar 18 13:17:06.086058 master-0 kubenswrapper[7599]: I0318 13:17:06.085819 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1"} err="failed to get container status \"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1\": rpc error: code = NotFound desc = could not find container \"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1\": container with ID starting with 690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1 not found: ID does not exist" Mar 18 13:17:06.086058 master-0 kubenswrapper[7599]: I0318 13:17:06.085855 7599 scope.go:117] "RemoveContainer" containerID="a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" Mar 18 13:17:06.086606 master-0 kubenswrapper[7599]: E0318 13:17:06.086513 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b\": container with ID starting with a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b not found: ID does not exist" containerID="a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" Mar 18 13:17:06.086702 master-0 kubenswrapper[7599]: I0318 13:17:06.086618 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b"} err="failed to get container status \"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b\": rpc error: code = NotFound desc = could not find container \"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b\": container with ID starting with a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b not found: ID does not exist" Mar 18 13:17:06.086756 master-0 
kubenswrapper[7599]: I0318 13:17:06.086696 7599 scope.go:117] "RemoveContainer" containerID="25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" Mar 18 13:17:06.087293 master-0 kubenswrapper[7599]: E0318 13:17:06.087253 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623\": container with ID starting with 25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623 not found: ID does not exist" containerID="25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" Mar 18 13:17:06.087345 master-0 kubenswrapper[7599]: I0318 13:17:06.087295 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623"} err="failed to get container status \"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623\": rpc error: code = NotFound desc = could not find container \"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623\": container with ID starting with 25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623 not found: ID does not exist" Mar 18 13:17:06.087345 master-0 kubenswrapper[7599]: I0318 13:17:06.087323 7599 scope.go:117] "RemoveContainer" containerID="a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" Mar 18 13:17:06.087867 master-0 kubenswrapper[7599]: E0318 13:17:06.087810 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4\": container with ID starting with a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4 not found: ID does not exist" containerID="a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" Mar 18 13:17:06.087939 master-0 kubenswrapper[7599]: I0318 13:17:06.087895 7599 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4"} err="failed to get container status \"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4\": rpc error: code = NotFound desc = could not find container \"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4\": container with ID starting with a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4 not found: ID does not exist" Mar 18 13:17:06.087939 master-0 kubenswrapper[7599]: I0318 13:17:06.087930 7599 scope.go:117] "RemoveContainer" containerID="b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" Mar 18 13:17:06.088476 master-0 kubenswrapper[7599]: E0318 13:17:06.088382 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553\": container with ID starting with b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553 not found: ID does not exist" containerID="b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" Mar 18 13:17:06.088548 master-0 kubenswrapper[7599]: I0318 13:17:06.088487 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553"} err="failed to get container status \"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553\": rpc error: code = NotFound desc = could not find container \"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553\": container with ID starting with b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553 not found: ID does not exist" Mar 18 13:17:06.088548 master-0 kubenswrapper[7599]: I0318 13:17:06.088535 7599 scope.go:117] "RemoveContainer" containerID="e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429" Mar 18 
13:17:06.089232 master-0 kubenswrapper[7599]: E0318 13:17:06.088993 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429\": container with ID starting with e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429 not found: ID does not exist" containerID="e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429" Mar 18 13:17:06.089232 master-0 kubenswrapper[7599]: I0318 13:17:06.089084 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429"} err="failed to get container status \"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429\": rpc error: code = NotFound desc = could not find container \"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429\": container with ID starting with e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429 not found: ID does not exist" Mar 18 13:17:06.089232 master-0 kubenswrapper[7599]: I0318 13:17:06.089148 7599 scope.go:117] "RemoveContainer" containerID="0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252" Mar 18 13:17:06.089652 master-0 kubenswrapper[7599]: E0318 13:17:06.089590 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252\": container with ID starting with 0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252 not found: ID does not exist" containerID="0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252" Mar 18 13:17:06.089710 master-0 kubenswrapper[7599]: I0318 13:17:06.089664 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252"} err="failed 
to get container status \"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252\": rpc error: code = NotFound desc = could not find container \"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252\": container with ID starting with 0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252 not found: ID does not exist" Mar 18 13:17:06.089746 master-0 kubenswrapper[7599]: I0318 13:17:06.089718 7599 scope.go:117] "RemoveContainer" containerID="34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a" Mar 18 13:17:06.090964 master-0 kubenswrapper[7599]: E0318 13:17:06.090903 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a\": container with ID starting with 34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a not found: ID does not exist" containerID="34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a" Mar 18 13:17:06.091032 master-0 kubenswrapper[7599]: I0318 13:17:06.090969 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a"} err="failed to get container status \"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a\": rpc error: code = NotFound desc = could not find container \"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a\": container with ID starting with 34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a not found: ID does not exist" Mar 18 13:17:06.091032 master-0 kubenswrapper[7599]: I0318 13:17:06.091010 7599 scope.go:117] "RemoveContainer" containerID="690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1" Mar 18 13:17:06.091737 master-0 kubenswrapper[7599]: I0318 13:17:06.091594 7599 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1"} err="failed to get container status \"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1\": rpc error: code = NotFound desc = could not find container \"690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1\": container with ID starting with 690d150bf9126351d3371d14a932ae3fddd48238d7ea5cee202800a083a202c1 not found: ID does not exist" Mar 18 13:17:06.091791 master-0 kubenswrapper[7599]: I0318 13:17:06.091735 7599 scope.go:117] "RemoveContainer" containerID="a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b" Mar 18 13:17:06.092384 master-0 kubenswrapper[7599]: I0318 13:17:06.092286 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b"} err="failed to get container status \"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b\": rpc error: code = NotFound desc = could not find container \"a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b\": container with ID starting with a56ef2268aedcc9b0bdbcda78017fa91cce5eaa7bb2b9a0bcc6e1faf89d9bd0b not found: ID does not exist" Mar 18 13:17:06.092384 master-0 kubenswrapper[7599]: I0318 13:17:06.092357 7599 scope.go:117] "RemoveContainer" containerID="25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623" Mar 18 13:17:06.092841 master-0 kubenswrapper[7599]: I0318 13:17:06.092777 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623"} err="failed to get container status \"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623\": rpc error: code = NotFound desc = could not find container \"25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623\": container with ID starting with 
25667e017afa5136617f1c6f60e91bf035d6e8ded1b221beee9a50d1249b2623 not found: ID does not exist" Mar 18 13:17:06.092897 master-0 kubenswrapper[7599]: I0318 13:17:06.092838 7599 scope.go:117] "RemoveContainer" containerID="a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4" Mar 18 13:17:06.093294 master-0 kubenswrapper[7599]: I0318 13:17:06.093251 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4"} err="failed to get container status \"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4\": rpc error: code = NotFound desc = could not find container \"a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4\": container with ID starting with a13dcbbd7771909f98990e3fbfa639b77bb9b4ed659307240d5421a4bc1673b4 not found: ID does not exist" Mar 18 13:17:06.093294 master-0 kubenswrapper[7599]: I0318 13:17:06.093289 7599 scope.go:117] "RemoveContainer" containerID="b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553" Mar 18 13:17:06.093698 master-0 kubenswrapper[7599]: I0318 13:17:06.093644 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553"} err="failed to get container status \"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553\": rpc error: code = NotFound desc = could not find container \"b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553\": container with ID starting with b1a8894c60e1a0b4aac83cae127aac7a97f1365cf3ab01361a3d2b5c535d9553 not found: ID does not exist" Mar 18 13:17:06.093752 master-0 kubenswrapper[7599]: I0318 13:17:06.093700 7599 scope.go:117] "RemoveContainer" containerID="e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429" Mar 18 13:17:06.094039 master-0 kubenswrapper[7599]: I0318 13:17:06.094001 7599 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429"} err="failed to get container status \"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429\": rpc error: code = NotFound desc = could not find container \"e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429\": container with ID starting with e09a418d06bba466d802f306bad44d8be5e7b6c6d9c9ae5189c3748d7fc68429 not found: ID does not exist" Mar 18 13:17:06.094076 master-0 kubenswrapper[7599]: I0318 13:17:06.094033 7599 scope.go:117] "RemoveContainer" containerID="0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252" Mar 18 13:17:06.094359 master-0 kubenswrapper[7599]: I0318 13:17:06.094322 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252"} err="failed to get container status \"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252\": rpc error: code = NotFound desc = could not find container \"0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252\": container with ID starting with 0c50ad9bf75bb5d525307b74331dd8a6e65e0eee7272e3ac766a19e6ee5d1252 not found: ID does not exist" Mar 18 13:17:06.094435 master-0 kubenswrapper[7599]: I0318 13:17:06.094355 7599 scope.go:117] "RemoveContainer" containerID="34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a" Mar 18 13:17:06.094779 master-0 kubenswrapper[7599]: I0318 13:17:06.094706 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a"} err="failed to get container status \"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a\": rpc error: code = NotFound desc = could not find container \"34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a\": container with ID starting with 
34cdcbdb05e204049cd01ed9f0a676fced26049cc5906931e3951888aaef818a not found: ID does not exist" Mar 18 13:17:06.254977 master-0 kubenswrapper[7599]: I0318 13:17:06.254884 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:06.254977 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:06.254977 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:06.254977 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:06.255642 master-0 kubenswrapper[7599]: I0318 13:17:06.255001 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:07.025194 master-0 kubenswrapper[7599]: I0318 13:17:07.025110 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:17:07.254323 master-0 kubenswrapper[7599]: I0318 13:17:07.254237 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:07.254323 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:07.254323 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:07.254323 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:07.254760 master-0 kubenswrapper[7599]: I0318 13:17:07.254342 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:07.391188 master-0 kubenswrapper[7599]: I0318 13:17:07.391009 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 18 13:17:08.254243 master-0 kubenswrapper[7599]: I0318 13:17:08.254119 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:08.254243 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:08.254243 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:08.254243 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:08.254714 master-0 kubenswrapper[7599]: I0318 13:17:08.254237 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:08.784346 master-0 kubenswrapper[7599]: E0318 13:17:08.784207 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:08.901458 master-0 kubenswrapper[7599]: E0318 13:17:08.901246 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189df1e52dc8815e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:16:34.871763294 +0000 UTC m=+569.832817536,LastTimestamp:2026-03-18 13:16:34.871763294 +0000 UTC m=+569.832817536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:17:09.255162 master-0 kubenswrapper[7599]: I0318 13:17:09.255035 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:09.255162 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:09.255162 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:09.255162 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:09.255711 master-0 kubenswrapper[7599]: I0318 13:17:09.255176 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:09.944865 master-0 kubenswrapper[7599]: I0318 13:17:09.944744 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_62eae2a9-2667-431e-ad73-ca18124d01f6/installer/0.log" Mar 18 13:17:09.944865 master-0 kubenswrapper[7599]: I0318 13:17:09.944846 7599 generic.go:334] "Generic (PLEG): container finished" podID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerID="84d4addeaab69d00ff961004821b23d05bc68d242853d91f47889592129b1a88" 
exitCode=1 Mar 18 13:17:09.946076 master-0 kubenswrapper[7599]: I0318 13:17:09.944894 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerDied","Data":"84d4addeaab69d00ff961004821b23d05bc68d242853d91f47889592129b1a88"} Mar 18 13:17:10.254266 master-0 kubenswrapper[7599]: I0318 13:17:10.254088 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:10.254266 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:10.254266 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:10.254266 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:10.254266 master-0 kubenswrapper[7599]: I0318 13:17:10.254178 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:10.773714 master-0 kubenswrapper[7599]: I0318 13:17:10.773624 7599 scope.go:117] "RemoveContainer" containerID="2b457975b25ebc40cb55c5ee5a932669f56be2bc949f61f598bc7d15209f09c7" Mar 18 13:17:10.793075 master-0 kubenswrapper[7599]: I0318 13:17:10.793034 7599 scope.go:117] "RemoveContainer" containerID="f8d9ce1d67226c0b362cac090a8a6e718851e873d29da1183f8e1cd8096dfcfa" Mar 18 13:17:11.242795 master-0 kubenswrapper[7599]: I0318 13:17:11.242719 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_62eae2a9-2667-431e-ad73-ca18124d01f6/installer/0.log" Mar 18 13:17:11.243309 master-0 kubenswrapper[7599]: I0318 13:17:11.242851 7599 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:17:11.253600 master-0 kubenswrapper[7599]: I0318 13:17:11.253500 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:11.253600 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:11.253600 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:11.253600 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:11.253948 master-0 kubenswrapper[7599]: I0318 13:17:11.253598 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:11.269100 master-0 kubenswrapper[7599]: I0318 13:17:11.269021 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock\") pod \"62eae2a9-2667-431e-ad73-ca18124d01f6\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " Mar 18 13:17:11.269313 master-0 kubenswrapper[7599]: I0318 13:17:11.269114 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir\") pod \"62eae2a9-2667-431e-ad73-ca18124d01f6\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " Mar 18 13:17:11.269313 master-0 kubenswrapper[7599]: I0318 13:17:11.269141 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access\") pod 
\"62eae2a9-2667-431e-ad73-ca18124d01f6\" (UID: \"62eae2a9-2667-431e-ad73-ca18124d01f6\") " Mar 18 13:17:11.269313 master-0 kubenswrapper[7599]: I0318 13:17:11.269284 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "62eae2a9-2667-431e-ad73-ca18124d01f6" (UID: "62eae2a9-2667-431e-ad73-ca18124d01f6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:11.269531 master-0 kubenswrapper[7599]: I0318 13:17:11.269282 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock" (OuterVolumeSpecName: "var-lock") pod "62eae2a9-2667-431e-ad73-ca18124d01f6" (UID: "62eae2a9-2667-431e-ad73-ca18124d01f6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:17:11.271684 master-0 kubenswrapper[7599]: I0318 13:17:11.271637 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "62eae2a9-2667-431e-ad73-ca18124d01f6" (UID: "62eae2a9-2667-431e-ad73-ca18124d01f6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:17:11.370925 master-0 kubenswrapper[7599]: I0318 13:17:11.370759 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:11.370925 master-0 kubenswrapper[7599]: I0318 13:17:11.370840 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eae2a9-2667-431e-ad73-ca18124d01f6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:11.370925 master-0 kubenswrapper[7599]: I0318 13:17:11.370875 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/62eae2a9-2667-431e-ad73-ca18124d01f6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:17:11.964919 master-0 kubenswrapper[7599]: I0318 13:17:11.964836 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_62eae2a9-2667-431e-ad73-ca18124d01f6/installer/0.log" Mar 18 13:17:11.965346 master-0 kubenswrapper[7599]: I0318 13:17:11.964931 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerDied","Data":"d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae"} Mar 18 13:17:11.965346 master-0 kubenswrapper[7599]: I0318 13:17:11.964970 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae" Mar 18 13:17:11.965346 master-0 kubenswrapper[7599]: I0318 13:17:11.965054 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:17:12.253963 master-0 kubenswrapper[7599]: I0318 13:17:12.253823 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:12.253963 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:12.253963 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:12.253963 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:12.253963 master-0 kubenswrapper[7599]: I0318 13:17:12.253886 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:13.254439 master-0 kubenswrapper[7599]: I0318 13:17:13.254313 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:13.254439 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:13.254439 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:13.254439 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:13.255064 master-0 kubenswrapper[7599]: I0318 13:17:13.254478 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:13.376827 master-0 kubenswrapper[7599]: E0318 13:17:13.376494 7599 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:14.254752 master-0 kubenswrapper[7599]: I0318 13:17:14.254676 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:14.254752 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:14.254752 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:14.254752 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:14.255496 master-0 kubenswrapper[7599]: I0318 13:17:14.254781 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:15.255452 master-0 kubenswrapper[7599]: I0318 13:17:15.255327 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:15.255452 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:15.255452 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:15.255452 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:15.255452 master-0 kubenswrapper[7599]: I0318 13:17:15.255401 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
13:17:15.371313 master-0 kubenswrapper[7599]: I0318 13:17:15.371223 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:15.408723 master-0 kubenswrapper[7599]: I0318 13:17:15.408595 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:17:15.408723 master-0 kubenswrapper[7599]: I0318 13:17:15.408686 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:17:16.254132 master-0 kubenswrapper[7599]: I0318 13:17:16.254023 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:16.254132 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:16.254132 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:16.254132 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:16.254787 master-0 kubenswrapper[7599]: I0318 13:17:16.254152 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:17.253725 master-0 kubenswrapper[7599]: I0318 13:17:17.253647 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:17.253725 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:17.253725 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:17.253725 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:17:17.254679 master-0 kubenswrapper[7599]: I0318 13:17:17.253757 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:18.254167 master-0 kubenswrapper[7599]: I0318 13:17:18.254085 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:18.254167 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:18.254167 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:18.254167 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:18.254167 master-0 kubenswrapper[7599]: I0318 13:17:18.254160 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:18.785175 master-0 kubenswrapper[7599]: E0318 13:17:18.784802 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:19.254053 master-0 kubenswrapper[7599]: I0318 13:17:19.253999 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:19.254053 master-0 kubenswrapper[7599]: [-]has-synced 
failed: reason withheld Mar 18 13:17:19.254053 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:19.254053 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:19.254734 master-0 kubenswrapper[7599]: I0318 13:17:19.254079 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:19.689792 master-0 kubenswrapper[7599]: I0318 13:17:19.689729 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:17:20.254262 master-0 kubenswrapper[7599]: I0318 13:17:20.254160 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:20.254262 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:20.254262 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:20.254262 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:20.254262 master-0 kubenswrapper[7599]: I0318 13:17:20.254258 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:21.253814 master-0 kubenswrapper[7599]: I0318 13:17:21.253767 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:21.253814 master-0 kubenswrapper[7599]: [-]has-synced 
failed: reason withheld Mar 18 13:17:21.253814 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:21.253814 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:21.254236 master-0 kubenswrapper[7599]: I0318 13:17:21.254211 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:22.254870 master-0 kubenswrapper[7599]: I0318 13:17:22.254819 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:22.254870 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:22.254870 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:22.254870 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:22.255676 master-0 kubenswrapper[7599]: I0318 13:17:22.255583 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:23.254644 master-0 kubenswrapper[7599]: I0318 13:17:23.254590 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:23.254644 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:23.254644 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:23.254644 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:23.255938 master-0 
kubenswrapper[7599]: I0318 13:17:23.255888 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:23.377292 master-0 kubenswrapper[7599]: E0318 13:17:23.377192 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:24.255217 master-0 kubenswrapper[7599]: I0318 13:17:24.255111 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:24.255217 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:24.255217 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:24.255217 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:24.255217 master-0 kubenswrapper[7599]: I0318 13:17:24.255198 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:25.254708 master-0 kubenswrapper[7599]: I0318 13:17:25.254619 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:25.254708 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:25.254708 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:17:25.254708 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:25.255163 master-0 kubenswrapper[7599]: I0318 13:17:25.254721 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:26.254658 master-0 kubenswrapper[7599]: I0318 13:17:26.254586 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:26.254658 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:26.254658 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:26.254658 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:26.255810 master-0 kubenswrapper[7599]: I0318 13:17:26.254686 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:27.255210 master-0 kubenswrapper[7599]: I0318 13:17:27.255117 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:27.255210 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:27.255210 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:27.255210 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:27.256975 master-0 kubenswrapper[7599]: I0318 13:17:27.255229 7599 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:28.253694 master-0 kubenswrapper[7599]: I0318 13:17:28.253592 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:28.253694 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:28.253694 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:28.253694 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:28.254016 master-0 kubenswrapper[7599]: I0318 13:17:28.253705 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:28.785638 master-0 kubenswrapper[7599]: E0318 13:17:28.785539 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:28.786569 master-0 kubenswrapper[7599]: E0318 13:17:28.786509 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:17:29.254802 master-0 kubenswrapper[7599]: I0318 13:17:29.254695 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:29.254802 master-0 
kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:29.254802 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:29.254802 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:29.255179 master-0 kubenswrapper[7599]: I0318 13:17:29.254827 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:30.254358 master-0 kubenswrapper[7599]: I0318 13:17:30.254296 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:30.254358 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:30.254358 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:30.254358 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:30.255234 master-0 kubenswrapper[7599]: I0318 13:17:30.255199 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:31.254577 master-0 kubenswrapper[7599]: I0318 13:17:31.254483 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:31.254577 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:31.254577 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:31.254577 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:17:31.255257 master-0 kubenswrapper[7599]: I0318 13:17:31.254574 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:32.255574 master-0 kubenswrapper[7599]: I0318 13:17:32.255532 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:32.255574 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:32.255574 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:32.255574 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:32.256172 master-0 kubenswrapper[7599]: I0318 13:17:32.256149 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:33.254127 master-0 kubenswrapper[7599]: I0318 13:17:33.254043 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:33.254127 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:33.254127 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:33.254127 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:33.254630 master-0 kubenswrapper[7599]: I0318 13:17:33.254165 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:33.378060 master-0 kubenswrapper[7599]: E0318 13:17:33.377956 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:17:33.378060 master-0 kubenswrapper[7599]: I0318 13:17:33.378012 7599 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:17:34.254919 master-0 kubenswrapper[7599]: I0318 13:17:34.254838 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:34.254919 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:34.254919 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:34.254919 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:34.256595 master-0 kubenswrapper[7599]: I0318 13:17:34.254963 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:35.139369 master-0 kubenswrapper[7599]: I0318 13:17:35.139288 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/1.log" Mar 18 13:17:35.140397 master-0 kubenswrapper[7599]: I0318 13:17:35.140331 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/0.log" Mar 18 13:17:35.141104 master-0 kubenswrapper[7599]: I0318 13:17:35.141020 7599 generic.go:334] "Generic (PLEG): container finished" podID="0e7156cf-2d68-4de8-b7e7-60e1539590dd" containerID="0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09" exitCode=1 Mar 18 13:17:35.141104 master-0 kubenswrapper[7599]: I0318 13:17:35.141067 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerDied","Data":"0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09"} Mar 18 13:17:35.141489 master-0 kubenswrapper[7599]: I0318 13:17:35.141152 7599 scope.go:117] "RemoveContainer" containerID="4a2b96ab3e758ccd953d067f7229799e7c3da85d90ceb61612bf33b3cfdeebe2" Mar 18 13:17:35.142177 master-0 kubenswrapper[7599]: I0318 13:17:35.142115 7599 scope.go:117] "RemoveContainer" containerID="0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09" Mar 18 13:17:35.142632 master-0 kubenswrapper[7599]: E0318 13:17:35.142563 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-x8r78_openshift-network-node-identity(0e7156cf-2d68-4de8-b7e7-60e1539590dd)\"" pod="openshift-network-node-identity/network-node-identity-x8r78" podUID="0e7156cf-2d68-4de8-b7e7-60e1539590dd" Mar 18 13:17:35.254193 master-0 kubenswrapper[7599]: I0318 13:17:35.254136 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:35.254193 master-0 kubenswrapper[7599]: [-]has-synced 
failed: reason withheld Mar 18 13:17:35.254193 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:35.254193 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:35.254536 master-0 kubenswrapper[7599]: I0318 13:17:35.254237 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:35.377036 master-0 kubenswrapper[7599]: I0318 13:17:35.376851 7599 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 18 13:17:36.153636 master-0 kubenswrapper[7599]: I0318 13:17:36.153563 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/1.log" Mar 18 13:17:36.254349 master-0 kubenswrapper[7599]: I0318 13:17:36.254294 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:36.254349 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:36.254349 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:36.254349 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:36.254911 master-0 kubenswrapper[7599]: I0318 13:17:36.254868 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
13:17:37.254720 master-0 kubenswrapper[7599]: I0318 13:17:37.254616 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:37.254720 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:37.254720 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:37.254720 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:37.254720 master-0 kubenswrapper[7599]: I0318 13:17:37.254706 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:38.255235 master-0 kubenswrapper[7599]: I0318 13:17:38.255146 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:38.255235 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:38.255235 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:38.255235 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:38.256341 master-0 kubenswrapper[7599]: I0318 13:17:38.255240 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:39.254293 master-0 kubenswrapper[7599]: I0318 13:17:39.254220 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:39.254293 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:39.254293 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:39.254293 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:39.254806 master-0 kubenswrapper[7599]: I0318 13:17:39.254327 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:40.255014 master-0 kubenswrapper[7599]: I0318 13:17:40.254896 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:40.255014 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:40.255014 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:40.255014 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:40.255014 master-0 kubenswrapper[7599]: I0318 13:17:40.254984 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:41.254166 master-0 kubenswrapper[7599]: I0318 13:17:41.254067 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:41.254166 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:41.254166 
master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:41.254166 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:41.254782 master-0 kubenswrapper[7599]: I0318 13:17:41.254172 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:42.254644 master-0 kubenswrapper[7599]: I0318 13:17:42.254539 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:42.254644 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:42.254644 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:42.254644 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:42.255802 master-0 kubenswrapper[7599]: I0318 13:17:42.254677 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:42.907096 master-0 kubenswrapper[7599]: E0318 13:17:42.906897 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df1a47e22e3f5 openshift-kube-controller-manager 10769 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:ce43e217adc4d0869adee3ba7c628c00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:57 +0000 UTC,LastTimestamp:2026-03-18 13:16:49.753063722 +0000 UTC m=+584.714117994,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:17:43.254919 master-0 kubenswrapper[7599]: I0318 13:17:43.254828 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:43.254919 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:43.254919 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:43.254919 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:43.256111 master-0 kubenswrapper[7599]: I0318 13:17:43.254922 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:43.378991 master-0 kubenswrapper[7599]: E0318 13:17:43.378892 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="200ms" Mar 18 13:17:44.255317 master-0 kubenswrapper[7599]: I0318 13:17:44.255218 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:44.255317 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:44.255317 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:44.255317 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:44.256332 master-0 kubenswrapper[7599]: I0318 13:17:44.255334 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:45.253831 master-0 kubenswrapper[7599]: I0318 13:17:45.253781 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:45.253831 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:45.253831 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:45.253831 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:45.254318 master-0 kubenswrapper[7599]: I0318 13:17:45.253855 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:46.254857 master-0 kubenswrapper[7599]: I0318 13:17:46.254784 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:46.254857 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:46.254857 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:46.254857 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:46.254857 master-0 kubenswrapper[7599]: I0318 13:17:46.254859 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:47.254373 master-0 kubenswrapper[7599]: I0318 13:17:47.254299 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:47.254373 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:47.254373 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:47.254373 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:47.254718 master-0 kubenswrapper[7599]: I0318 13:17:47.254399 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:48.253823 master-0 kubenswrapper[7599]: I0318 13:17:48.253710 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:48.253823 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 
18 13:17:48.253823 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:48.253823 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:48.254962 master-0 kubenswrapper[7599]: I0318 13:17:48.253834 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:49.254492 master-0 kubenswrapper[7599]: I0318 13:17:49.254376 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:49.254492 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:49.254492 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:49.254492 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:49.255598 master-0 kubenswrapper[7599]: I0318 13:17:49.254499 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:49.411992 master-0 kubenswrapper[7599]: E0318 13:17:49.411907 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:49.412634 master-0 kubenswrapper[7599]: I0318 13:17:49.412592 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 13:17:50.254010 master-0 kubenswrapper[7599]: I0318 13:17:50.253947 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:50.254010 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:50.254010 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:50.254010 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:50.254375 master-0 kubenswrapper[7599]: I0318 13:17:50.254010 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:50.281805 master-0 kubenswrapper[7599]: I0318 13:17:50.281699 7599 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="e51c044a2405dc8e2c15e99d23adc3d518ef8ba93339eb0eb649f5a9e556f757" exitCode=0 Mar 18 13:17:50.281805 master-0 kubenswrapper[7599]: I0318 13:17:50.281770 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e51c044a2405dc8e2c15e99d23adc3d518ef8ba93339eb0eb649f5a9e556f757"} Mar 18 13:17:50.281805 master-0 kubenswrapper[7599]: I0318 13:17:50.281807 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"fa7bdc6eb3bcdebec3d64b4ce8194bafce362b67c9019cd975ec6f9a5ac40f46"} Mar 18 13:17:50.283080 master-0 kubenswrapper[7599]: I0318 13:17:50.282359 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" 
podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:17:50.283080 master-0 kubenswrapper[7599]: I0318 13:17:50.282383 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:17:50.371696 master-0 kubenswrapper[7599]: I0318 13:17:50.371603 7599 scope.go:117] "RemoveContainer" containerID="0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09" Mar 18 13:17:51.254862 master-0 kubenswrapper[7599]: I0318 13:17:51.254797 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:51.254862 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:51.254862 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:51.254862 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:51.255390 master-0 kubenswrapper[7599]: I0318 13:17:51.254889 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:51.293276 master-0 kubenswrapper[7599]: I0318 13:17:51.293181 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/1.log" Mar 18 13:17:51.294255 master-0 kubenswrapper[7599]: I0318 13:17:51.293823 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"086e9fc8ca523144919a5163c71d4016399ffa720f997ba6ec3ad12584d9cb30"} Mar 18 13:17:52.255039 master-0 kubenswrapper[7599]: I0318 
13:17:52.254887 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:52.255039 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:52.255039 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:52.255039 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:52.255596 master-0 kubenswrapper[7599]: I0318 13:17:52.255041 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:53.255184 master-0 kubenswrapper[7599]: I0318 13:17:53.255104 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:53.255184 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:53.255184 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:53.255184 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:53.255184 master-0 kubenswrapper[7599]: I0318 13:17:53.255184 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:53.580218 master-0 kubenswrapper[7599]: E0318 13:17:53.579948 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 13:17:54.255037 master-0 kubenswrapper[7599]: I0318 13:17:54.254890 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:54.255037 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:54.255037 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:54.255037 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:54.255819 master-0 kubenswrapper[7599]: I0318 13:17:54.255063 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:55.254244 master-0 kubenswrapper[7599]: I0318 13:17:55.254151 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:55.254244 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:55.254244 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:55.254244 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:55.254748 master-0 kubenswrapper[7599]: I0318 13:17:55.254249 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:56.254794 master-0 kubenswrapper[7599]: I0318 13:17:56.254648 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:56.254794 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:56.254794 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:56.254794 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:56.256314 master-0 kubenswrapper[7599]: I0318 13:17:56.254800 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:56.337947 master-0 kubenswrapper[7599]: I0318 13:17:56.337861 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/3.log" Mar 18 13:17:56.339479 master-0 kubenswrapper[7599]: I0318 13:17:56.339303 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/2.log" Mar 18 13:17:56.340296 master-0 kubenswrapper[7599]: I0318 13:17:56.340245 7599 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" exitCode=1 Mar 18 13:17:56.340482 master-0 kubenswrapper[7599]: I0318 13:17:56.340309 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd"} Mar 18 13:17:56.340482 master-0 kubenswrapper[7599]: I0318 13:17:56.340374 7599 
scope.go:117] "RemoveContainer" containerID="6ac0b3c06e29048753b73b2f17b0ac17c18e2b197c3c5c6227d9ef97a38d373f" Mar 18 13:17:56.341281 master-0 kubenswrapper[7599]: I0318 13:17:56.341223 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:17:56.341876 master-0 kubenswrapper[7599]: E0318 13:17:56.341746 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:17:57.254703 master-0 kubenswrapper[7599]: I0318 13:17:57.254617 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:57.254703 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:57.254703 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:57.254703 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:57.255581 master-0 kubenswrapper[7599]: I0318 13:17:57.254721 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:57.351228 master-0 kubenswrapper[7599]: I0318 13:17:57.351141 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/3.log" Mar 18 13:17:58.254559 master-0 kubenswrapper[7599]: 
I0318 13:17:58.254405 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:58.254559 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:58.254559 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:58.254559 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:58.255702 master-0 kubenswrapper[7599]: I0318 13:17:58.254628 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:59.254135 master-0 kubenswrapper[7599]: I0318 13:17:59.253966 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:17:59.254135 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:17:59.254135 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:17:59.254135 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:17:59.254135 master-0 kubenswrapper[7599]: I0318 13:17:59.254049 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:17:59.437833 master-0 kubenswrapper[7599]: E0318 13:17:59.437737 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:17:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:00.254704 master-0 kubenswrapper[7599]: I0318 13:18:00.254616 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:00.254704 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:00.254704 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:00.254704 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:00.255137 master-0 kubenswrapper[7599]: I0318 13:18:00.254733 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 13:18:01.254551 master-0 kubenswrapper[7599]: I0318 13:18:01.254480 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:01.254551 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:01.254551 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:01.254551 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:01.255788 master-0 kubenswrapper[7599]: I0318 13:18:01.255549 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:02.254304 master-0 kubenswrapper[7599]: I0318 13:18:02.254243 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:02.254304 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:02.254304 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:02.254304 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:02.255552 master-0 kubenswrapper[7599]: I0318 13:18:02.255505 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:03.253741 master-0 kubenswrapper[7599]: I0318 13:18:03.253693 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:03.253741 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:03.253741 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:03.253741 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:03.254004 master-0 kubenswrapper[7599]: I0318 13:18:03.253756 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:03.981315 master-0 kubenswrapper[7599]: E0318 13:18:03.981204 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 13:18:04.255094 master-0 kubenswrapper[7599]: I0318 13:18:04.254955 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:04.255094 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:04.255094 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:04.255094 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:04.255680 master-0 kubenswrapper[7599]: I0318 13:18:04.255635 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:05.254082 master-0 
kubenswrapper[7599]: I0318 13:18:05.254019 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:05.254082 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:05.254082 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:05.254082 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:05.255112 master-0 kubenswrapper[7599]: I0318 13:18:05.254689 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:06.254855 master-0 kubenswrapper[7599]: I0318 13:18:06.254764 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:06.254855 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:06.254855 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:06.254855 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:06.256162 master-0 kubenswrapper[7599]: I0318 13:18:06.255543 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:07.254673 master-0 kubenswrapper[7599]: I0318 13:18:07.254594 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:07.254673 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:07.254673 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:07.254673 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:07.255900 master-0 kubenswrapper[7599]: I0318 13:18:07.254697 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:08.254314 master-0 kubenswrapper[7599]: I0318 13:18:08.254233 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:08.254314 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:08.254314 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:08.254314 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:08.254644 master-0 kubenswrapper[7599]: I0318 13:18:08.254341 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:08.371715 master-0 kubenswrapper[7599]: I0318 13:18:08.371613 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:18:08.372777 master-0 kubenswrapper[7599]: E0318 13:18:08.372160 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:18:09.255465 master-0 kubenswrapper[7599]: I0318 13:18:09.255373 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:09.255465 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:09.255465 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:09.255465 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:09.255947 master-0 kubenswrapper[7599]: I0318 13:18:09.255486 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:09.438092 master-0 kubenswrapper[7599]: E0318 13:18:09.437967 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 18 13:18:10.254562 master-0 kubenswrapper[7599]: I0318 13:18:10.254370 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:10.254562 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:10.254562 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:10.254562 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:10.254562 master-0 
kubenswrapper[7599]: I0318 13:18:10.254494 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:11.253525 master-0 kubenswrapper[7599]: I0318 13:18:11.253428 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:11.253525 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:11.253525 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:11.253525 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:11.253525 master-0 kubenswrapper[7599]: I0318 13:18:11.253505 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:12.255790 master-0 kubenswrapper[7599]: I0318 13:18:12.255662 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:12.255790 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:12.255790 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:12.255790 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:12.255790 master-0 kubenswrapper[7599]: I0318 13:18:12.255771 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:12.478406 master-0 kubenswrapper[7599]: I0318 13:18:12.478314 7599 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" exitCode=0 Mar 18 13:18:12.478406 master-0 kubenswrapper[7599]: I0318 13:18:12.478368 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8"} Mar 18 13:18:12.478406 master-0 kubenswrapper[7599]: I0318 13:18:12.478404 7599 scope.go:117] "RemoveContainer" containerID="ae6b8122ce3ad297d1b8d967c790c62c2b0fe5b326636877eaeee68260e70360" Mar 18 13:18:12.479072 master-0 kubenswrapper[7599]: I0318 13:18:12.478985 7599 scope.go:117] "RemoveContainer" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" Mar 18 13:18:12.479275 master-0 kubenswrapper[7599]: E0318 13:18:12.479221 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 13:18:13.254851 master-0 kubenswrapper[7599]: I0318 13:18:13.254734 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:13.254851 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld 
Mar 18 13:18:13.254851 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:13.254851 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:13.255376 master-0 kubenswrapper[7599]: I0318 13:18:13.254849 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:14.254872 master-0 kubenswrapper[7599]: I0318 13:18:14.254750 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:14.254872 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:14.254872 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:14.254872 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:14.255453 master-0 kubenswrapper[7599]: I0318 13:18:14.254877 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:14.782826 master-0 kubenswrapper[7599]: E0318 13:18:14.782721 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 13:18:15.254538 master-0 kubenswrapper[7599]: I0318 13:18:15.254452 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:15.254538 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:15.254538 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:15.254538 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:15.254538 master-0 kubenswrapper[7599]: I0318 13:18:15.254528 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:16.254409 master-0 kubenswrapper[7599]: I0318 13:18:16.254324 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:16.254409 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:16.254409 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:16.254409 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:16.254409 master-0 kubenswrapper[7599]: I0318 13:18:16.254401 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:16.910219 master-0 kubenswrapper[7599]: E0318 13:18:16.910017 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df1a48d8318cf openshift-kube-controller-manager 10777 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:ce43e217adc4d0869adee3ba7c628c00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:57 +0000 UTC,LastTimestamp:2026-03-18 13:16:49.995795225 +0000 UTC m=+584.956849467,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:18:17.254672 master-0 kubenswrapper[7599]: I0318 13:18:17.254512 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:17.254672 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:17.254672 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:17.254672 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:17.254672 master-0 kubenswrapper[7599]: I0318 13:18:17.254625 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:18.254514 master-0 kubenswrapper[7599]: I0318 13:18:18.254432 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:18.254514 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:18.254514 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:18:18.254514 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:18.254514 master-0 kubenswrapper[7599]: I0318 13:18:18.254503 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:18.392568 master-0 kubenswrapper[7599]: I0318 13:18:18.392469 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:18:18.392836 master-0 kubenswrapper[7599]: I0318 13:18:18.392638 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:18:18.393225 master-0 kubenswrapper[7599]: I0318 13:18:18.393187 7599 scope.go:117] "RemoveContainer" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" Mar 18 13:18:18.393594 master-0 kubenswrapper[7599]: E0318 13:18:18.393515 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 13:18:18.522352 master-0 kubenswrapper[7599]: I0318 13:18:18.522123 7599 scope.go:117] "RemoveContainer" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" Mar 18 13:18:18.522352 master-0 kubenswrapper[7599]: E0318 13:18:18.522321 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 13:18:19.253632 master-0 kubenswrapper[7599]: I0318 13:18:19.253501 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:19.253632 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:19.253632 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:19.253632 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:19.253632 master-0 kubenswrapper[7599]: I0318 13:18:19.253583 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:19.438738 master-0 kubenswrapper[7599]: E0318 13:18:19.438620 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:20.253839 master-0 kubenswrapper[7599]: I0318 13:18:20.253725 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:20.253839 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:20.253839 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:20.253839 
master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:20.253839 master-0 kubenswrapper[7599]: I0318 13:18:20.253818 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:21.253714 master-0 kubenswrapper[7599]: I0318 13:18:21.253628 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:21.253714 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:21.253714 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:21.253714 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:21.253714 master-0 kubenswrapper[7599]: I0318 13:18:21.253700 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:21.372754 master-0 kubenswrapper[7599]: I0318 13:18:21.372693 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:18:21.372987 master-0 kubenswrapper[7599]: E0318 13:18:21.372935 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:18:22.254478 master-0 kubenswrapper[7599]: 
I0318 13:18:22.254355 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:22.254478 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:22.254478 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:22.254478 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:22.254478 master-0 kubenswrapper[7599]: I0318 13:18:22.254456 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:22.547809 master-0 kubenswrapper[7599]: I0318 13:18:22.547740 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/1.log" Mar 18 13:18:22.548616 master-0 kubenswrapper[7599]: I0318 13:18:22.548574 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/0.log" Mar 18 13:18:22.549031 master-0 kubenswrapper[7599]: I0318 13:18:22.548988 7599 generic.go:334] "Generic (PLEG): container finished" podID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerID="8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088" exitCode=1 Mar 18 13:18:22.549031 master-0 kubenswrapper[7599]: I0318 13:18:22.549026 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerDied","Data":"8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088"} Mar 18 13:18:22.549120 master-0 
kubenswrapper[7599]: I0318 13:18:22.549058 7599 scope.go:117] "RemoveContainer" containerID="aebb870640af737294de5fde7faf1b19862e6f81b4ae715f35fdf208373b75e7" Mar 18 13:18:22.550117 master-0 kubenswrapper[7599]: I0318 13:18:22.550050 7599 scope.go:117] "RemoveContainer" containerID="8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088" Mar 18 13:18:22.550493 master-0 kubenswrapper[7599]: E0318 13:18:22.550388 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-q2ndb_openshift-catalogd(16f8e725-f18a-478e-88c5-87d54aeb4857)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" Mar 18 13:18:23.254895 master-0 kubenswrapper[7599]: I0318 13:18:23.254809 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:23.254895 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:23.254895 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:23.254895 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:23.254895 master-0 kubenswrapper[7599]: I0318 13:18:23.254896 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:23.558349 master-0 kubenswrapper[7599]: I0318 13:18:23.558209 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/1.log" Mar 18 13:18:24.255943 
master-0 kubenswrapper[7599]: I0318 13:18:24.255869 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:24.255943 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:24.255943 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:24.255943 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:24.257293 master-0 kubenswrapper[7599]: I0318 13:18:24.255971 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:24.285723 master-0 kubenswrapper[7599]: E0318 13:18:24.285626 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:18:25.254764 master-0 kubenswrapper[7599]: I0318 13:18:25.254687 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:25.254764 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:25.254764 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:25.254764 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:25.255257 master-0 kubenswrapper[7599]: I0318 13:18:25.254771 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 18 13:18:25.576059 master-0 kubenswrapper[7599]: I0318 13:18:25.575935 7599 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="ac6a03840b83398cf49ffdbda9e45e37a6a4ad486796c7aa5525dfdd483b2a1c" exitCode=0 Mar 18 13:18:25.576059 master-0 kubenswrapper[7599]: I0318 13:18:25.576008 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"ac6a03840b83398cf49ffdbda9e45e37a6a4ad486796c7aa5525dfdd483b2a1c"} Mar 18 13:18:25.576936 master-0 kubenswrapper[7599]: I0318 13:18:25.576518 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:18:25.576936 master-0 kubenswrapper[7599]: I0318 13:18:25.576544 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:18:26.021466 master-0 kubenswrapper[7599]: I0318 13:18:26.021293 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:18:26.022092 master-0 kubenswrapper[7599]: I0318 13:18:26.022040 7599 scope.go:117] "RemoveContainer" containerID="8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088" Mar 18 13:18:26.022385 master-0 kubenswrapper[7599]: E0318 13:18:26.022337 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-q2ndb_openshift-catalogd(16f8e725-f18a-478e-88c5-87d54aeb4857)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" podUID="16f8e725-f18a-478e-88c5-87d54aeb4857" Mar 18 13:18:26.254609 master-0 kubenswrapper[7599]: I0318 13:18:26.254494 7599 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:26.254609 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:26.254609 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:26.254609 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:26.254609 master-0 kubenswrapper[7599]: I0318 13:18:26.254588 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:26.383339 master-0 kubenswrapper[7599]: E0318 13:18:26.383154 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 13:18:27.254606 master-0 kubenswrapper[7599]: I0318 13:18:27.254532 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:27.254606 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:27.254606 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:27.254606 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:27.255692 master-0 kubenswrapper[7599]: I0318 13:18:27.254645 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:28.254443 master-0 kubenswrapper[7599]: I0318 13:18:28.254344 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:28.254443 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:28.254443 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:28.254443 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:28.255051 master-0 kubenswrapper[7599]: I0318 13:18:28.254456 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:29.254279 master-0 kubenswrapper[7599]: I0318 13:18:29.254216 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:29.254279 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:29.254279 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:29.254279 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:29.254588 master-0 kubenswrapper[7599]: I0318 13:18:29.254326 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:29.439431 master-0 kubenswrapper[7599]: E0318 13:18:29.439341 7599 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:30.254990 master-0 kubenswrapper[7599]: I0318 13:18:30.254854 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:30.254990 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:30.254990 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:30.254990 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:30.254990 master-0 kubenswrapper[7599]: I0318 13:18:30.254973 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:31.253901 master-0 kubenswrapper[7599]: I0318 13:18:31.253831 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:31.253901 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:31.253901 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:31.253901 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:31.253901 master-0 kubenswrapper[7599]: I0318 13:18:31.253900 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:31.372280 
master-0 kubenswrapper[7599]: I0318 13:18:31.372183 7599 scope.go:117] "RemoveContainer" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" Mar 18 13:18:31.637153 master-0 kubenswrapper[7599]: I0318 13:18:31.634449 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218"} Mar 18 13:18:31.637153 master-0 kubenswrapper[7599]: I0318 13:18:31.634850 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:18:31.637379 master-0 kubenswrapper[7599]: I0318 13:18:31.637251 7599 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 18 13:18:31.637379 master-0 kubenswrapper[7599]: I0318 13:18:31.637320 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 18 13:18:32.254480 master-0 kubenswrapper[7599]: I0318 13:18:32.254382 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:32.254480 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:32.254480 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 
13:18:32.254480 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:32.255583 master-0 kubenswrapper[7599]: I0318 13:18:32.254500 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:32.372766 master-0 kubenswrapper[7599]: I0318 13:18:32.372715 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:18:32.373607 master-0 kubenswrapper[7599]: E0318 13:18:32.373578 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:18:32.645468 master-0 kubenswrapper[7599]: I0318 13:18:32.645285 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:18:33.254580 master-0 kubenswrapper[7599]: I0318 13:18:33.254486 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:33.254580 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:33.254580 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:33.254580 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:33.255579 master-0 kubenswrapper[7599]: I0318 13:18:33.254591 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:34.255732 master-0 kubenswrapper[7599]: I0318 13:18:34.255654 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:34.255732 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:34.255732 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:34.255732 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:34.255732 master-0 kubenswrapper[7599]: I0318 13:18:34.255743 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:35.254723 master-0 kubenswrapper[7599]: I0318 13:18:35.254643 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:35.254723 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:35.254723 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:35.254723 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:35.255229 master-0 kubenswrapper[7599]: I0318 13:18:35.254736 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:35.387369 
master-0 kubenswrapper[7599]: I0318 13:18:35.387262 7599 status_manager.go:851] "Failed to get status for pod" podUID="0e7156cf-2d68-4de8-b7e7-60e1539590dd" pod="openshift-network-node-identity/network-node-identity-x8r78" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-x8r78)" Mar 18 13:18:35.668128 master-0 kubenswrapper[7599]: I0318 13:18:35.668031 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log" Mar 18 13:18:35.668779 master-0 kubenswrapper[7599]: I0318 13:18:35.668710 7599 generic.go:334] "Generic (PLEG): container finished" podID="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" containerID="d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31" exitCode=1 Mar 18 13:18:35.669068 master-0 kubenswrapper[7599]: I0318 13:18:35.668777 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerDied","Data":"d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31"} Mar 18 13:18:35.669788 master-0 kubenswrapper[7599]: I0318 13:18:35.669733 7599 scope.go:117] "RemoveContainer" containerID="d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31" Mar 18 13:18:36.021363 master-0 kubenswrapper[7599]: I0318 13:18:36.021252 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:18:36.023148 master-0 kubenswrapper[7599]: I0318 13:18:36.023086 7599 scope.go:117] "RemoveContainer" containerID="8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088" Mar 18 13:18:36.253614 master-0 kubenswrapper[7599]: 
I0318 13:18:36.253540 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:36.253614 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:36.253614 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:36.253614 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:36.253890 master-0 kubenswrapper[7599]: I0318 13:18:36.253644 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:36.680436 master-0 kubenswrapper[7599]: I0318 13:18:36.680362 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/1.log" Mar 18 13:18:36.681066 master-0 kubenswrapper[7599]: I0318 13:18:36.681025 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"73e962786148e331d71bca99dadc5db6b5fccf1a19effac4baa8614e839409fb"} Mar 18 13:18:36.681359 master-0 kubenswrapper[7599]: I0318 13:18:36.681306 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:18:36.683995 master-0 kubenswrapper[7599]: I0318 13:18:36.683970 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log" Mar 18 13:18:36.684520 master-0 
kubenswrapper[7599]: I0318 13:18:36.684475 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"5f3c8d778382c867b08fbd74f7923dd512336eb8b121e70f84bb319617f783a5"} Mar 18 13:18:37.253962 master-0 kubenswrapper[7599]: I0318 13:18:37.253884 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:37.253962 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:37.253962 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:37.253962 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:37.254315 master-0 kubenswrapper[7599]: I0318 13:18:37.253965 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:38.254736 master-0 kubenswrapper[7599]: I0318 13:18:38.254639 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:38.254736 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:38.254736 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:38.254736 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:38.254736 master-0 kubenswrapper[7599]: I0318 13:18:38.254735 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:39.254816 master-0 kubenswrapper[7599]: I0318 13:18:39.254707 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:39.254816 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:39.254816 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:39.254816 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:39.254816 master-0 kubenswrapper[7599]: I0318 13:18:39.254796 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:39.440280 master-0 kubenswrapper[7599]: E0318 13:18:39.440145 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:18:39.440280 master-0 kubenswrapper[7599]: E0318 13:18:39.440216 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:18:39.584533 master-0 kubenswrapper[7599]: E0318 13:18:39.584316 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 13:18:40.254947 master-0 kubenswrapper[7599]: I0318 
13:18:40.254791 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:40.254947 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:40.254947 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:40.254947 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:40.255917 master-0 kubenswrapper[7599]: I0318 13:18:40.254953 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:41.254715 master-0 kubenswrapper[7599]: I0318 13:18:41.254370 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:41.254715 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:41.254715 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:41.254715 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:41.254715 master-0 kubenswrapper[7599]: I0318 13:18:41.254560 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:41.731053 master-0 kubenswrapper[7599]: I0318 13:18:41.730963 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/1.log" 
Mar 18 13:18:41.732157 master-0 kubenswrapper[7599]: I0318 13:18:41.732092 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/0.log" Mar 18 13:18:41.732258 master-0 kubenswrapper[7599]: I0318 13:18:41.732210 7599 generic.go:334] "Generic (PLEG): container finished" podID="deb67ea0-8342-40cb-b0f4-115270e878dd" containerID="f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c" exitCode=1 Mar 18 13:18:41.732402 master-0 kubenswrapper[7599]: I0318 13:18:41.732358 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerDied","Data":"f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c"} Mar 18 13:18:41.732538 master-0 kubenswrapper[7599]: I0318 13:18:41.732521 7599 scope.go:117] "RemoveContainer" containerID="8c9e4d7f5a1cfb905af9530af8305e93c12f5088f9374b32f042b05f77b48591" Mar 18 13:18:41.733075 master-0 kubenswrapper[7599]: I0318 13:18:41.733045 7599 scope.go:117] "RemoveContainer" containerID="f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c" Mar 18 13:18:41.733438 master-0 kubenswrapper[7599]: E0318 13:18:41.733384 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:18:42.254811 master-0 kubenswrapper[7599]: I0318 13:18:42.254625 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:18:42.254811 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:18:42.254811 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:18:42.254811 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:18:42.255388 master-0 kubenswrapper[7599]: I0318 13:18:42.254879 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:18:42.255388 master-0 kubenswrapper[7599]: I0318 13:18:42.254960 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:18:42.255999 master-0 kubenswrapper[7599]: I0318 13:18:42.255921 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429"} pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" containerMessage="Container router failed startup probe, will be restarted" Mar 18 13:18:42.256143 master-0 kubenswrapper[7599]: I0318 13:18:42.255998 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" containerID="cri-o://9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429" gracePeriod=3600 Mar 18 13:18:42.743993 master-0 kubenswrapper[7599]: I0318 13:18:42.743905 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/1.log" Mar 18 
13:18:46.025482 master-0 kubenswrapper[7599]: I0318 13:18:46.025259 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:18:46.371753 master-0 kubenswrapper[7599]: I0318 13:18:46.371559 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:18:46.774692 master-0 kubenswrapper[7599]: I0318 13:18:46.774625 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/3.log" Mar 18 13:18:46.775050 master-0 kubenswrapper[7599]: I0318 13:18:46.775011 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"} Mar 18 13:18:46.777229 master-0 kubenswrapper[7599]: I0318 13:18:46.777185 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log" Mar 18 13:18:46.777649 master-0 kubenswrapper[7599]: I0318 13:18:46.777588 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/cluster-cloud-controller-manager/0.log" Mar 18 13:18:46.777649 master-0 kubenswrapper[7599]: I0318 13:18:46.777636 7599 generic.go:334] "Generic (PLEG): container finished" podID="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" containerID="b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29" exitCode=1 Mar 18 13:18:46.777971 master-0 kubenswrapper[7599]: I0318 13:18:46.777686 7599 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerDied","Data":"b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29"} Mar 18 13:18:46.778553 master-0 kubenswrapper[7599]: I0318 13:18:46.778509 7599 scope.go:117] "RemoveContainer" containerID="b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29" Mar 18 13:18:47.789522 master-0 kubenswrapper[7599]: I0318 13:18:47.789448 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log" Mar 18 13:18:47.790795 master-0 kubenswrapper[7599]: I0318 13:18:47.790763 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/cluster-cloud-controller-manager/0.log" Mar 18 13:18:47.791080 master-0 kubenswrapper[7599]: I0318 13:18:47.791039 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"71d3404a38107a5a5dbdfa1cee4c6928a2f2a83f6bfa89195edb436d961641f1"} Mar 18 13:18:47.793893 master-0 kubenswrapper[7599]: I0318 13:18:47.793834 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/1.log" Mar 18 13:18:47.795879 master-0 kubenswrapper[7599]: I0318 13:18:47.795822 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/0.log" Mar 18 13:18:47.796030 master-0 kubenswrapper[7599]: I0318 13:18:47.795908 7599 generic.go:334] "Generic (PLEG): container finished" podID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerID="373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d" exitCode=1 Mar 18 13:18:47.796030 master-0 kubenswrapper[7599]: I0318 13:18:47.795955 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerDied","Data":"373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d"} Mar 18 13:18:47.796161 master-0 kubenswrapper[7599]: I0318 13:18:47.796027 7599 scope.go:117] "RemoveContainer" containerID="e6d3b86684e16237f7515b45dbb7b40a94f5f8bddf2d34d18c36a6a4d6af41b4" Mar 18 13:18:47.797060 master-0 kubenswrapper[7599]: I0318 13:18:47.797012 7599 scope.go:117] "RemoveContainer" containerID="373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d" Mar 18 13:18:47.797507 master-0 kubenswrapper[7599]: E0318 13:18:47.797465 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-9bjsj_openshift-operator-controller(98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" Mar 18 13:18:48.807490 master-0 kubenswrapper[7599]: I0318 13:18:48.807389 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/1.log" Mar 18 
13:18:50.913036 master-0 kubenswrapper[7599]: E0318 13:18:50.912850 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189df1a48e00c186 openshift-kube-controller-manager 10778 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:ce43e217adc4d0869adee3ba7c628c00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:11:57 +0000 UTC,LastTimestamp:2026-03-18 13:16:50.008893753 +0000 UTC m=+584.969947995,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:18:53.371179 master-0 kubenswrapper[7599]: I0318 13:18:53.371077 7599 scope.go:117] "RemoveContainer" containerID="f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c" Mar 18 13:18:53.847979 master-0 kubenswrapper[7599]: I0318 13:18:53.847888 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/1.log" Mar 18 13:18:53.848531 master-0 kubenswrapper[7599]: I0318 13:18:53.847994 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0"} Mar 18 13:18:55.986818 master-0 kubenswrapper[7599]: E0318 13:18:55.986612 7599 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:18:56.049271 master-0 kubenswrapper[7599]: I0318 13:18:56.049159 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:18:56.049271 master-0 kubenswrapper[7599]: I0318 13:18:56.049248 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:18:56.050051 master-0 kubenswrapper[7599]: I0318 13:18:56.050002 7599 scope.go:117] "RemoveContainer" containerID="373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d" Mar 18 13:18:56.050451 master-0 kubenswrapper[7599]: E0318 13:18:56.050350 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-9bjsj_openshift-operator-controller(98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" podUID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" Mar 18 13:18:57.884080 master-0 kubenswrapper[7599]: I0318 13:18:57.884012 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/1.log" Mar 18 13:18:57.887314 master-0 kubenswrapper[7599]: I0318 13:18:57.887243 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/0.log" Mar 18 13:18:57.887496 
master-0 kubenswrapper[7599]: I0318 13:18:57.887334 7599 generic.go:334] "Generic (PLEG): container finished" podID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" containerID="acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6" exitCode=1 Mar 18 13:18:57.887496 master-0 kubenswrapper[7599]: I0318 13:18:57.887378 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerDied","Data":"acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6"} Mar 18 13:18:57.887496 master-0 kubenswrapper[7599]: I0318 13:18:57.887458 7599 scope.go:117] "RemoveContainer" containerID="49667c3562724d21d11f45af9648468c2dd5436306c9e389954957510ee7b256" Mar 18 13:18:57.888075 master-0 kubenswrapper[7599]: I0318 13:18:57.888027 7599 scope.go:117] "RemoveContainer" containerID="acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6" Mar 18 13:18:57.888366 master-0 kubenswrapper[7599]: E0318 13:18:57.888316 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mz4qp_openshift-machine-api(ac6d8eb6-1d5e-4757-9823-5ffe478c711c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" podUID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" Mar 18 13:18:58.898363 master-0 kubenswrapper[7599]: I0318 13:18:58.898265 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/1.log" Mar 18 13:18:59.580308 master-0 kubenswrapper[7599]: E0318 13:18:59.580221 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline 
exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:18:59.908966 master-0 kubenswrapper[7599]: I0318 13:18:59.908886 7599 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="aec5346f46da33d997a4c62bc92998fc48a19573760229d71e97091c1c9a67c9" exitCode=0 Mar 18 13:18:59.908966 master-0 kubenswrapper[7599]: I0318 13:18:59.908958 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"aec5346f46da33d997a4c62bc92998fc48a19573760229d71e97091c1c9a67c9"} Mar 18 13:18:59.909488 master-0 kubenswrapper[7599]: I0318 13:18:59.909462 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:18:59.909526 master-0 kubenswrapper[7599]: I0318 13:18:59.909492 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:19:00.918980 master-0 kubenswrapper[7599]: I0318 13:19:00.918919 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-fk8ql_f38b464d-a218-4753-b7ac-a7d373952c4d/machine-approver-controller/0.log" Mar 18 13:19:00.920107 master-0 kubenswrapper[7599]: I0318 13:19:00.920001 7599 generic.go:334] "Generic (PLEG): container finished" podID="f38b464d-a218-4753-b7ac-a7d373952c4d" containerID="55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931" exitCode=255 Mar 18 13:19:00.920107 master-0 kubenswrapper[7599]: I0318 13:19:00.920080 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerDied","Data":"55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931"} Mar 18 13:19:00.920776 master-0 kubenswrapper[7599]: I0318 
13:19:00.920734 7599 scope.go:117] "RemoveContainer" containerID="55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931" Mar 18 13:19:01.931826 master-0 kubenswrapper[7599]: I0318 13:19:01.931744 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-fk8ql_f38b464d-a218-4753-b7ac-a7d373952c4d/machine-approver-controller/0.log" Mar 18 13:19:01.932617 master-0 kubenswrapper[7599]: I0318 13:19:01.932352 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"92280bccd2b6d5e0f9862db35c3e4e8885627146385a886dbc5fe3415968b7dc"} Mar 18 13:19:03.998103 master-0 kubenswrapper[7599]: E0318 13:19:03.998017 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdcd72a6_a8e8_47ba_8b51_7325d35bad6b.slice/crio-7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7.scope\": RecentStats: unable to find data in memory cache]" Mar 18 13:19:04.953796 master-0 kubenswrapper[7599]: I0318 13:19:04.953700 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-5vhnr_bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/control-plane-machine-set-operator/0.log" Mar 18 13:19:04.953796 master-0 kubenswrapper[7599]: I0318 13:19:04.953778 7599 generic.go:334] "Generic (PLEG): container finished" podID="bdcd72a6-a8e8-47ba-8b51-7325d35bad6b" containerID="7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7" exitCode=1 Mar 18 13:19:04.954394 master-0 kubenswrapper[7599]: I0318 13:19:04.953822 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" 
event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerDied","Data":"7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7"} Mar 18 13:19:04.954563 master-0 kubenswrapper[7599]: I0318 13:19:04.954516 7599 scope.go:117] "RemoveContainer" containerID="7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7" Mar 18 13:19:05.963035 master-0 kubenswrapper[7599]: I0318 13:19:05.962928 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-5vhnr_bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/control-plane-machine-set-operator/0.log" Mar 18 13:19:05.963717 master-0 kubenswrapper[7599]: I0318 13:19:05.963687 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerStarted","Data":"ba376a1d73a67617b715fd0231574193b06155f8209c21a3c5307d41e5c8af24"} Mar 18 13:19:06.976377 master-0 kubenswrapper[7599]: I0318 13:19:06.976236 7599 generic.go:334] "Generic (PLEG): container finished" podID="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" containerID="d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b" exitCode=0 Mar 18 13:19:06.976377 master-0 kubenswrapper[7599]: I0318 13:19:06.976337 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerDied","Data":"d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b"} Mar 18 13:19:06.977662 master-0 kubenswrapper[7599]: I0318 13:19:06.976401 7599 scope.go:117] "RemoveContainer" containerID="bdb06b047a43d8f5cc135f15126477528bd6743cd5d10a3d7306b59927303450" Mar 18 13:19:06.977662 master-0 kubenswrapper[7599]: I0318 13:19:06.977292 7599 scope.go:117] "RemoveContainer" containerID="d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b" Mar 18 
13:19:06.977992 master-0 kubenswrapper[7599]: E0318 13:19:06.977853 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-57f769d897-hvnt4_openshift-ovn-kubernetes(d42bcf13-548b-46c4-9a3d-a46f1b6ec045)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" podUID="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" Mar 18 13:19:09.372096 master-0 kubenswrapper[7599]: I0318 13:19:09.371990 7599 scope.go:117] "RemoveContainer" containerID="373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d" Mar 18 13:19:10.019229 master-0 kubenswrapper[7599]: I0318 13:19:10.019121 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/1.log" Mar 18 13:19:10.020144 master-0 kubenswrapper[7599]: I0318 13:19:10.020019 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"7a7f2ccfc78b34586b520e7b273c5529da5b88ce117fdf9009b75da391aff58c"} Mar 18 13:19:10.020933 master-0 kubenswrapper[7599]: I0318 13:19:10.020811 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:19:12.038133 master-0 kubenswrapper[7599]: I0318 13:19:12.038027 7599 generic.go:334] "Generic (PLEG): container finished" podID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerID="8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c" exitCode=0 Mar 18 13:19:12.038133 master-0 kubenswrapper[7599]: I0318 13:19:12.038091 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerDied","Data":"8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c"} Mar 18 13:19:12.039198 master-0 kubenswrapper[7599]: I0318 13:19:12.038908 7599 scope.go:117] "RemoveContainer" containerID="8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c" Mar 18 13:19:12.987661 master-0 kubenswrapper[7599]: E0318 13:19:12.987543 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:19:13.046742 master-0 kubenswrapper[7599]: I0318 13:19:13.046704 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerStarted","Data":"313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8"} Mar 18 13:19:13.047770 master-0 kubenswrapper[7599]: I0318 13:19:13.047717 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:19:13.052121 master-0 kubenswrapper[7599]: I0318 13:19:13.052071 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:19:13.373362 master-0 kubenswrapper[7599]: I0318 13:19:13.373165 7599 scope.go:117] "RemoveContainer" containerID="acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6" Mar 18 13:19:14.057844 master-0 kubenswrapper[7599]: I0318 13:19:14.057750 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/1.log" Mar 18 13:19:14.058577 master-0 kubenswrapper[7599]: I0318 13:19:14.058152 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b"} Mar 18 13:19:16.052780 master-0 kubenswrapper[7599]: I0318 13:19:16.052593 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:19:19.099932 master-0 kubenswrapper[7599]: I0318 13:19:19.099873 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 18 13:19:19.100764 master-0 kubenswrapper[7599]: I0318 13:19:19.100329 7599 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64" exitCode=1 Mar 18 13:19:19.100764 master-0 kubenswrapper[7599]: I0318 13:19:19.100394 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64"} Mar 18 13:19:19.101168 master-0 kubenswrapper[7599]: I0318 13:19:19.101141 7599 scope.go:117] "RemoveContainer" containerID="8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64" Mar 18 13:19:19.683624 master-0 kubenswrapper[7599]: I0318 13:19:19.683363 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:19:19.683624 master-0 kubenswrapper[7599]: I0318 13:19:19.683538 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:19:20.110678 master-0 kubenswrapper[7599]: I0318 13:19:20.110506 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 18 13:19:20.111668 master-0 kubenswrapper[7599]: I0318 13:19:20.111006 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a205b027fb3bb1fbef6f4f0b2f902a1dfc370d3685fa3edd769df89a510f9823"} Mar 18 13:19:20.111668 master-0 kubenswrapper[7599]: I0318 13:19:20.111129 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:19:20.113860 master-0 kubenswrapper[7599]: I0318 13:19:20.113819 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:19:20.113860 master-0 kubenswrapper[7599]: I0318 13:19:20.113854 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="5801ea0bc2c8f6281bfc1858bdd8e4d303817df46389abb0fb746b4985f6eaba" exitCode=0 Mar 18 13:19:20.113992 master-0 kubenswrapper[7599]: I0318 13:19:20.113873 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerDied","Data":"5801ea0bc2c8f6281bfc1858bdd8e4d303817df46389abb0fb746b4985f6eaba"} Mar 18 13:19:20.114172 
master-0 kubenswrapper[7599]: I0318 13:19:20.114139 7599 scope.go:117] "RemoveContainer" containerID="5801ea0bc2c8f6281bfc1858bdd8e4d303817df46389abb0fb746b4985f6eaba" Mar 18 13:19:21.178071 master-0 kubenswrapper[7599]: I0318 13:19:21.125543 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:19:21.178071 master-0 kubenswrapper[7599]: I0318 13:19:21.126466 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f"} Mar 18 13:19:22.372478 master-0 kubenswrapper[7599]: I0318 13:19:22.372390 7599 scope.go:117] "RemoveContainer" containerID="d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b" Mar 18 13:19:23.145025 master-0 kubenswrapper[7599]: I0318 13:19:23.144936 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"d9d04bdcfdc2c33ba07b3882d662c20d9203671752e04b4037bf3995673ad759"} Mar 18 13:19:24.156200 master-0 kubenswrapper[7599]: I0318 13:19:24.156110 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/2.log" Mar 18 13:19:24.157216 master-0 kubenswrapper[7599]: I0318 13:19:24.156789 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/1.log" Mar 18 13:19:24.157216 master-0 kubenswrapper[7599]: I0318 13:19:24.156843 7599 generic.go:334] "Generic 
(PLEG): container finished" podID="deb67ea0-8342-40cb-b0f4-115270e878dd" containerID="8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0" exitCode=1 Mar 18 13:19:24.157216 master-0 kubenswrapper[7599]: I0318 13:19:24.156876 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerDied","Data":"8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0"} Mar 18 13:19:24.157216 master-0 kubenswrapper[7599]: I0318 13:19:24.156914 7599 scope.go:117] "RemoveContainer" containerID="f15ee5d33285c15f95f38e99b3afffed56d23dec0f3da62015e493b81d27528c" Mar 18 13:19:24.158143 master-0 kubenswrapper[7599]: I0318 13:19:24.158048 7599 scope.go:117] "RemoveContainer" containerID="8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0" Mar 18 13:19:24.158723 master-0 kubenswrapper[7599]: E0318 13:19:24.158666 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: E0318 13:19:24.915400 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: &Event{ObjectMeta:{router-default-7dcf5569b5-gvmtv.189df16f390020c1 openshift-ingress 11121 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-gvmtv,UID:00375107-9a3b-4161-a90d-72ea8827c5fc,APIVersion:v1,ResourceVersion:7900,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: body: [-]backend-http failed: reason withheld Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:08:08 +0000 UTC,LastTimestamp:2026-03-18 13:16:50.254103924 +0000 UTC m=+585.215158166,Count:368,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 13:19:24.915628 master-0 kubenswrapper[7599]: > Mar 18 13:19:25.167039 master-0 kubenswrapper[7599]: I0318 13:19:25.166842 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/2.log" Mar 18 13:19:27.015962 master-0 kubenswrapper[7599]: I0318 13:19:27.015894 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:27.016756 master-0 kubenswrapper[7599]: I0318 13:19:27.016657 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:19:29.202539 master-0 kubenswrapper[7599]: I0318 13:19:29.202378 7599 generic.go:334] "Generic (PLEG): container finished" 
podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429" exitCode=0 Mar 18 13:19:29.203671 master-0 kubenswrapper[7599]: I0318 13:19:29.202476 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429"} Mar 18 13:19:29.203671 master-0 kubenswrapper[7599]: I0318 13:19:29.203543 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5"} Mar 18 13:19:29.203671 master-0 kubenswrapper[7599]: I0318 13:19:29.203574 7599 scope.go:117] "RemoveContainer" containerID="07ca97585aaa8b06b5f428151eb377bdd83b407ab4db465d3d58a7d10ed909a2" Mar 18 13:19:29.251753 master-0 kubenswrapper[7599]: I0318 13:19:29.251681 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:19:29.255082 master-0 kubenswrapper[7599]: I0318 13:19:29.255038 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:29.255082 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:29.255082 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:29.255082 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:29.255247 master-0 kubenswrapper[7599]: I0318 13:19:29.255098 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:29.989121 master-0 kubenswrapper[7599]: E0318 13:19:29.988991 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:19:30.016555 master-0 kubenswrapper[7599]: I0318 13:19:30.016453 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:19:30.016824 master-0 kubenswrapper[7599]: I0318 13:19:30.016576 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:19:30.254229 master-0 kubenswrapper[7599]: I0318 13:19:30.254061 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:30.254229 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:30.254229 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:30.254229 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:30.254229 master-0 
kubenswrapper[7599]: I0318 13:19:30.254174 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:31.255122 master-0 kubenswrapper[7599]: I0318 13:19:31.255025 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:31.255122 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:31.255122 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:31.255122 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:31.256111 master-0 kubenswrapper[7599]: I0318 13:19:31.255131 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:32.255301 master-0 kubenswrapper[7599]: I0318 13:19:32.255188 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:32.255301 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:32.255301 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:32.255301 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:32.256720 master-0 kubenswrapper[7599]: I0318 13:19:32.255298 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:33.254534 master-0 kubenswrapper[7599]: I0318 13:19:33.254396 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:33.254534 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:33.254534 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:33.254534 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:33.254859 master-0 kubenswrapper[7599]: I0318 13:19:33.254567 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:33.912514 master-0 kubenswrapper[7599]: E0318 13:19:33.912402 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:19:34.253435 master-0 kubenswrapper[7599]: I0318 13:19:34.253368 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:34.253435 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:34.253435 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:34.253435 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:34.253732 master-0 kubenswrapper[7599]: I0318 13:19:34.253486 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:34.256348 master-0 kubenswrapper[7599]: I0318 13:19:34.256291 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"5a427fe739d798898ee45c0bf356bb2e2c26d43edea40af7c1f44b831591867e"} Mar 18 13:19:35.252606 master-0 kubenswrapper[7599]: I0318 13:19:35.252102 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:19:35.255054 master-0 kubenswrapper[7599]: I0318 13:19:35.254888 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:35.255054 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:35.255054 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:35.255054 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:35.255054 master-0 kubenswrapper[7599]: I0318 13:19:35.254959 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:35.273094 master-0 kubenswrapper[7599]: I0318 13:19:35.273006 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"73ea828b1ec3fb5eeb4f799bf4aac37c9e219ec1697ef6d6fbf4963823466e19"} Mar 18 13:19:35.273094 master-0 kubenswrapper[7599]: I0318 13:19:35.273072 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"447db57cab5ba3c69b27b8cc5082a77bb51da84b7ea28cd8dbd5650fa54f13e0"} Mar 18 13:19:35.273094 master-0 kubenswrapper[7599]: I0318 13:19:35.273091 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"4fdf9a4a6b4d2639ef00c48189b5ca39aef049f50cde7194dc5dacc0bb496278"} Mar 18 13:19:35.273094 master-0 kubenswrapper[7599]: I0318 13:19:35.273108 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f3b165a56beb52cbeaa61b6b02ce9e692cd29bd9898a9870c02bb4754aac4be3"} Mar 18 13:19:35.273673 master-0 kubenswrapper[7599]: I0318 13:19:35.273528 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:19:35.273673 master-0 kubenswrapper[7599]: I0318 13:19:35.273564 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:19:35.389094 master-0 kubenswrapper[7599]: I0318 13:19:35.388672 7599 status_manager.go:851] "Failed to get status for pod" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 18 13:19:36.254345 master-0 kubenswrapper[7599]: I0318 13:19:36.254236 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:36.254345 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld 
Mar 18 13:19:36.254345 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:36.254345 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:36.254345 master-0 kubenswrapper[7599]: I0318 13:19:36.254301 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:36.372455 master-0 kubenswrapper[7599]: I0318 13:19:36.372315 7599 scope.go:117] "RemoveContainer" containerID="8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0" Mar 18 13:19:36.373054 master-0 kubenswrapper[7599]: E0318 13:19:36.372966 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:19:37.254243 master-0 kubenswrapper[7599]: I0318 13:19:37.254173 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:37.254243 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:37.254243 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:37.254243 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:37.256129 master-0 kubenswrapper[7599]: I0318 13:19:37.254275 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:38.253852 master-0 kubenswrapper[7599]: I0318 13:19:38.253789 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:38.253852 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:38.253852 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:38.253852 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:38.254174 master-0 kubenswrapper[7599]: I0318 13:19:38.253862 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:39.254993 master-0 kubenswrapper[7599]: I0318 13:19:39.254875 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:19:39.254993 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:19:39.254993 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:19:39.254993 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:19:39.254993 master-0 kubenswrapper[7599]: I0318 13:19:39.254978 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:19:39.413896 master-0 kubenswrapper[7599]: I0318 13:19:39.413786 7599 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 18 13:19:39.413896 master-0 kubenswrapper[7599]: I0318 13:19:39.413866 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 13:19:40.016211 master-0 kubenswrapper[7599]: I0318 13:19:40.016151 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:19:40.016512 master-0 kubenswrapper[7599]: I0318 13:19:40.016228 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:19:40.253988 master-0 kubenswrapper[7599]: I0318 13:19:40.253921 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:40.253988 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:40.253988 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:40.253988 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:40.254256 master-0 kubenswrapper[7599]: I0318 13:19:40.254005 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:41.253209 master-0 kubenswrapper[7599]: I0318 13:19:41.253137 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:41.253209 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:41.253209 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:41.253209 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:41.254326 master-0 kubenswrapper[7599]: I0318 13:19:41.253218 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:42.254182 master-0 kubenswrapper[7599]: I0318 13:19:42.254094 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:42.254182 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:42.254182 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:42.254182 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:42.255182 master-0 kubenswrapper[7599]: I0318 13:19:42.254205 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:43.254175 master-0 kubenswrapper[7599]: I0318 13:19:43.254090 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:43.254175 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:43.254175 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:43.254175 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:43.254781 master-0 kubenswrapper[7599]: I0318 13:19:43.254185 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:44.255151 master-0 kubenswrapper[7599]: I0318 13:19:44.254999 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:44.255151 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:44.255151 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:44.255151 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:44.256212 master-0 kubenswrapper[7599]: I0318 13:19:44.255165 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:45.254216 master-0 kubenswrapper[7599]: I0318 13:19:45.254157 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:45.254216 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:45.254216 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:45.254216 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:45.254643 master-0 kubenswrapper[7599]: I0318 13:19:45.254228 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:46.255298 master-0 kubenswrapper[7599]: I0318 13:19:46.255178 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:46.255298 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:46.255298 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:46.255298 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:46.255298 master-0 kubenswrapper[7599]: I0318 13:19:46.255271 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:46.989856 master-0 kubenswrapper[7599]: E0318 13:19:46.989759 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 13:19:47.255213 master-0 kubenswrapper[7599]: I0318 13:19:47.255048 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:47.255213 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:47.255213 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:47.255213 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:47.255213 master-0 kubenswrapper[7599]: I0318 13:19:47.255161 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:48.253881 master-0 kubenswrapper[7599]: I0318 13:19:48.253800 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:48.253881 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:48.253881 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:48.253881 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:48.254213 master-0 kubenswrapper[7599]: I0318 13:19:48.253897 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:48.371274 master-0 kubenswrapper[7599]: I0318 13:19:48.371191 7599 scope.go:117] "RemoveContainer" containerID="8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0"
Mar 18 13:19:49.254107 master-0 kubenswrapper[7599]: I0318 13:19:49.254025 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:49.254107 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:49.254107 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:49.254107 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:49.254504 master-0 kubenswrapper[7599]: I0318 13:19:49.254149 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:49.377723 master-0 kubenswrapper[7599]: I0318 13:19:49.377666 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/2.log"
Mar 18 13:19:49.384971 master-0 kubenswrapper[7599]: I0318 13:19:49.384913 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5"}
Mar 18 13:19:49.438033 master-0 kubenswrapper[7599]: I0318 13:19:49.437957 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 13:19:50.016655 master-0 kubenswrapper[7599]: I0318 13:19:50.016580 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:19:50.017322 master-0 kubenswrapper[7599]: I0318 13:19:50.017274 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:19:50.017585 master-0 kubenswrapper[7599]: I0318 13:19:50.017559 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:19:50.019114 master-0 kubenswrapper[7599]: I0318 13:19:50.019067 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 13:19:50.019659 master-0 kubenswrapper[7599]: I0318 13:19:50.019623 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" containerID="cri-o://f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f" gracePeriod=30
Mar 18 13:19:50.150245 master-0 kubenswrapper[7599]: E0318 13:19:50.150183 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce43e217adc4d0869adee3ba7c628c00.slice/crio-f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce43e217adc4d0869adee3ba7c628c00.slice/crio-conmon-f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 13:19:50.254457 master-0 kubenswrapper[7599]: I0318 13:19:50.254397 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:50.254457 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:50.254457 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:50.254457 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:50.254825 master-0 kubenswrapper[7599]: I0318 13:19:50.254798 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:50.389257 master-0 kubenswrapper[7599]: I0318 13:19:50.389211 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/1.log"
Mar 18 13:19:50.391862 master-0 kubenswrapper[7599]: I0318 13:19:50.391824 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log"
Mar 18 13:19:50.392040 master-0 kubenswrapper[7599]: I0318 13:19:50.391885 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f" exitCode=255
Mar 18 13:19:50.392040 master-0 kubenswrapper[7599]: I0318 13:19:50.391920 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerDied","Data":"f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f"}
Mar 18 13:19:50.392040 master-0 kubenswrapper[7599]: I0318 13:19:50.391956 7599 scope.go:117] "RemoveContainer" containerID="5801ea0bc2c8f6281bfc1858bdd8e4d303817df46389abb0fb746b4985f6eaba"
Mar 18 13:19:51.254808 master-0 kubenswrapper[7599]: I0318 13:19:51.254702 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:51.254808 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:51.254808 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:51.254808 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:51.254808 master-0 kubenswrapper[7599]: I0318 13:19:51.254797 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:51.406372 master-0 kubenswrapper[7599]: I0318 13:19:51.406277 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/1.log"
Mar 18 13:19:51.408631 master-0 kubenswrapper[7599]: I0318 13:19:51.408590 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log"
Mar 18 13:19:51.408737 master-0 kubenswrapper[7599]: I0318 13:19:51.408695 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b"}
Mar 18 13:19:52.254911 master-0 kubenswrapper[7599]: I0318 13:19:52.254815 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:52.254911 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:52.254911 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:52.254911 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:52.255351 master-0 kubenswrapper[7599]: I0318 13:19:52.254911 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:53.254677 master-0 kubenswrapper[7599]: I0318 13:19:53.254580 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:53.254677 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:53.254677 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:53.254677 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:53.254677 master-0 kubenswrapper[7599]: I0318 13:19:53.254664 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:54.254123 master-0 kubenswrapper[7599]: I0318 13:19:54.254050 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:54.254123 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:54.254123 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:54.254123 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:54.254624 master-0 kubenswrapper[7599]: I0318 13:19:54.254128 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:54.437269 master-0 kubenswrapper[7599]: I0318 13:19:54.437183 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 18 13:19:55.254995 master-0 kubenswrapper[7599]: I0318 13:19:55.254874 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:55.254995 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:55.254995 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:55.254995 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:55.255397 master-0 kubenswrapper[7599]: I0318 13:19:55.254996 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:56.254223 master-0 kubenswrapper[7599]: I0318 13:19:56.254173 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:56.254223 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:56.254223 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:56.254223 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:56.255087 master-0 kubenswrapper[7599]: I0318 13:19:56.255055 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:57.016260 master-0 kubenswrapper[7599]: I0318 13:19:57.016176 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:19:57.016855 master-0 kubenswrapper[7599]: I0318 13:19:57.016287 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:19:57.254825 master-0 kubenswrapper[7599]: I0318 13:19:57.254763 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:57.254825 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:57.254825 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:57.254825 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:57.255895 master-0 kubenswrapper[7599]: I0318 13:19:57.255568 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:58.254541 master-0 kubenswrapper[7599]: I0318 13:19:58.254406 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:58.254541 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:58.254541 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:58.254541 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:58.255767 master-0 kubenswrapper[7599]: I0318 13:19:58.254544 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:19:59.254315 master-0 kubenswrapper[7599]: I0318 13:19:59.254202 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:19:59.254315 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:19:59.254315 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:19:59.254315 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:19:59.254702 master-0 kubenswrapper[7599]: I0318 13:19:59.254327 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:00.017386 master-0 kubenswrapper[7599]: I0318 13:20:00.017271 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:20:00.018239 master-0 kubenswrapper[7599]: I0318 13:20:00.017479 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:20:00.254548 master-0 kubenswrapper[7599]: I0318 13:20:00.254479 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:00.254548 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:00.254548 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:00.254548 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:00.254548 master-0 kubenswrapper[7599]: I0318 13:20:00.254548 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:01.254105 master-0 kubenswrapper[7599]: I0318 13:20:01.253995 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:01.254105 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:01.254105 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:01.254105 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:01.254105 master-0 kubenswrapper[7599]: I0318 13:20:01.254055 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:02.255055 master-0 kubenswrapper[7599]: I0318 13:20:02.254949 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:02.255055 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:02.255055 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:02.255055 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:02.255055 master-0 kubenswrapper[7599]: I0318 13:20:02.255030 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:03.255271 master-0 kubenswrapper[7599]: I0318 13:20:03.255157 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:03.255271 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:03.255271 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:03.255271 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:03.256582 master-0 kubenswrapper[7599]: I0318 13:20:03.255269 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:03.990725 master-0 kubenswrapper[7599]: E0318 13:20:03.990618 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 18 13:20:04.253856 master-0 kubenswrapper[7599]: I0318 13:20:04.253615 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:04.253856 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:04.253856 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:04.253856 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:04.253856 master-0 kubenswrapper[7599]: I0318 13:20:04.253776 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:05.253860 master-0 kubenswrapper[7599]: I0318 13:20:05.253778 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:05.253860 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:05.253860 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:05.253860 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:05.254795 master-0 kubenswrapper[7599]: I0318 13:20:05.253878 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:06.254830 master-0 kubenswrapper[7599]: I0318 13:20:06.254685 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:06.254830 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:06.254830 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:06.254830 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:06.254830 master-0 kubenswrapper[7599]: I0318 13:20:06.254767 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:07.254252 master-0 kubenswrapper[7599]: I0318 13:20:07.254149 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:07.254252 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:07.254252 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:07.254252 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:07.254252 master-0 kubenswrapper[7599]: I0318 13:20:07.254230 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:08.255124 master-0 kubenswrapper[7599]: I0318 13:20:08.254970 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:08.255124 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:08.255124 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:08.255124 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:08.255124 master-0 kubenswrapper[7599]: I0318 13:20:08.255113 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:09.254263 master-0 kubenswrapper[7599]: I0318 13:20:09.254167 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:09.254263 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:09.254263 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:09.254263 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:09.254263 master-0 kubenswrapper[7599]: I0318 13:20:09.254238 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:09.276270 master-0 kubenswrapper[7599]: E0318 13:20:09.276173 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 13:20:09.552060 master-0 kubenswrapper[7599]: I0318 13:20:09.551889 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe"
Mar 18 13:20:09.552060 master-0 kubenswrapper[7599]: I0318 13:20:09.551952 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe"
Mar 18 13:20:09.689363 master-0 kubenswrapper[7599]: I0318 13:20:09.689270 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 13:20:10.016963 master-0 kubenswrapper[7599]: I0318 13:20:10.016843 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 13:20:10.016963 master-0 kubenswrapper[7599]: I0318 13:20:10.016950 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:20:10.255108 master-0 kubenswrapper[7599]: I0318 13:20:10.254985 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:10.255108 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:10.255108 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:10.255108 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:10.255108 master-0 kubenswrapper[7599]: I0318 13:20:10.255067 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:11.254190 master-0 kubenswrapper[7599]: I0318 13:20:11.254124 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:20:11.254190 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:20:11.254190 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:20:11.254190 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:20:11.254986 master-0 kubenswrapper[7599]: I0318 13:20:11.254202 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:20:12.254308 master-0 kubenswrapper[7599]: I0318 13:20:12.254247 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:12.254308 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:12.254308 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:12.254308 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:12.254992 master-0 kubenswrapper[7599]: I0318 13:20:12.254329 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:13.254281 master-0 kubenswrapper[7599]: I0318 13:20:13.254169 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:13.254281 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:13.254281 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:13.254281 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:13.254281 master-0 kubenswrapper[7599]: I0318 13:20:13.254259 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:13.598885 master-0 kubenswrapper[7599]: I0318 13:20:13.598793 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/2.log" Mar 18 13:20:13.599746 master-0 kubenswrapper[7599]: I0318 13:20:13.599686 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/1.log" Mar 18 13:20:13.600311 master-0 kubenswrapper[7599]: I0318 13:20:13.600254 7599 generic.go:334] "Generic (PLEG): container finished" podID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" containerID="3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b" exitCode=1 Mar 18 13:20:13.600493 master-0 kubenswrapper[7599]: I0318 13:20:13.600310 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerDied","Data":"3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b"} Mar 18 13:20:13.600493 master-0 kubenswrapper[7599]: I0318 13:20:13.600389 7599 scope.go:117] "RemoveContainer" containerID="acf7b4bcf62e14560e517d9dee729cb0a4ed47ca5e163fdf0d69e59c5b3307d6" Mar 18 13:20:13.601538 master-0 kubenswrapper[7599]: I0318 13:20:13.601486 7599 scope.go:117] "RemoveContainer" containerID="3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b" Mar 18 13:20:13.602078 master-0 kubenswrapper[7599]: E0318 13:20:13.602012 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mz4qp_openshift-machine-api(ac6d8eb6-1d5e-4757-9823-5ffe478c711c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" podUID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" Mar 18 13:20:14.254716 master-0 kubenswrapper[7599]: I0318 13:20:14.254614 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 13:20:14.254716 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:14.254716 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:14.254716 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:14.255822 master-0 kubenswrapper[7599]: I0318 13:20:14.254741 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:14.610629 master-0 kubenswrapper[7599]: I0318 13:20:14.610394 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/2.log" Mar 18 13:20:15.254730 master-0 kubenswrapper[7599]: I0318 13:20:15.254629 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:15.254730 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:15.254730 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:15.254730 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:15.256036 master-0 kubenswrapper[7599]: I0318 13:20:15.254796 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:16.254690 master-0 kubenswrapper[7599]: I0318 13:20:16.254521 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:16.254690 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:16.254690 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:16.254690 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:16.254690 master-0 kubenswrapper[7599]: I0318 13:20:16.254671 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:17.254822 master-0 kubenswrapper[7599]: I0318 13:20:17.254717 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:17.254822 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:17.254822 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:17.254822 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:17.256008 master-0 kubenswrapper[7599]: I0318 13:20:17.254836 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:18.253750 master-0 kubenswrapper[7599]: I0318 13:20:18.253672 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:18.253750 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:18.253750 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:20:18.253750 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:18.254086 master-0 kubenswrapper[7599]: I0318 13:20:18.253780 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:18.639165 master-0 kubenswrapper[7599]: I0318 13:20:18.639067 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/3.log" Mar 18 13:20:18.640057 master-0 kubenswrapper[7599]: I0318 13:20:18.639812 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/2.log" Mar 18 13:20:18.640057 master-0 kubenswrapper[7599]: I0318 13:20:18.639873 7599 generic.go:334] "Generic (PLEG): container finished" podID="deb67ea0-8342-40cb-b0f4-115270e878dd" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" exitCode=1 Mar 18 13:20:18.640057 master-0 kubenswrapper[7599]: I0318 13:20:18.639923 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerDied","Data":"006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5"} Mar 18 13:20:18.640057 master-0 kubenswrapper[7599]: I0318 13:20:18.639970 7599 scope.go:117] "RemoveContainer" containerID="8ef573ee57b8050c3c4e94180a33fe16493427de3188e526b45f7045b89d6fa0" Mar 18 13:20:18.640575 master-0 kubenswrapper[7599]: I0318 13:20:18.640528 7599 scope.go:117] "RemoveContainer" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" Mar 18 
13:20:18.640811 master-0 kubenswrapper[7599]: E0318 13:20:18.640766 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:20:19.254113 master-0 kubenswrapper[7599]: I0318 13:20:19.254040 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:19.254113 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:19.254113 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:19.254113 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:19.254113 master-0 kubenswrapper[7599]: I0318 13:20:19.254104 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:19.655897 master-0 kubenswrapper[7599]: I0318 13:20:19.655741 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/3.log" Mar 18 13:20:20.018070 master-0 kubenswrapper[7599]: I0318 13:20:20.017109 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:20.018402 master-0 kubenswrapper[7599]: I0318 13:20:20.018175 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:20.018402 master-0 kubenswrapper[7599]: I0318 13:20:20.018317 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:20.020291 master-0 kubenswrapper[7599]: I0318 13:20:20.019710 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 13:20:20.020291 master-0 kubenswrapper[7599]: I0318 13:20:20.019882 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" containerID="cri-o://9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b" gracePeriod=30 Mar 18 13:20:20.104704 master-0 kubenswrapper[7599]: E0318 13:20:20.104626 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce43e217adc4d0869adee3ba7c628c00.slice/crio-9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b.scope\": RecentStats: unable to find data in memory cache]" Mar 18 13:20:20.253289 master-0 kubenswrapper[7599]: I0318 13:20:20.253227 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:20.253289 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:20.253289 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:20.253289 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:20.253727 master-0 kubenswrapper[7599]: I0318 13:20:20.253289 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:20.664733 master-0 kubenswrapper[7599]: I0318 13:20:20.664661 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/2.log" Mar 18 13:20:20.665670 master-0 kubenswrapper[7599]: I0318 13:20:20.665276 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/1.log" Mar 18 13:20:20.667303 master-0 kubenswrapper[7599]: I0318 13:20:20.667252 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:20:20.667461 master-0 kubenswrapper[7599]: I0318 
13:20:20.667323 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b" exitCode=255 Mar 18 13:20:20.667461 master-0 kubenswrapper[7599]: I0318 13:20:20.667371 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerDied","Data":"9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b"} Mar 18 13:20:20.667461 master-0 kubenswrapper[7599]: I0318 13:20:20.667451 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375"} Mar 18 13:20:20.667690 master-0 kubenswrapper[7599]: I0318 13:20:20.667491 7599 scope.go:117] "RemoveContainer" containerID="f8bd9eefa1e06d8809135458c227d2663cf258bc36f9f7d3d5127f27675b429f" Mar 18 13:20:20.993475 master-0 kubenswrapper[7599]: E0318 13:20:20.993279 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Mar 18 13:20:21.254588 master-0 kubenswrapper[7599]: I0318 13:20:21.254451 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:21.254588 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:21.254588 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:21.254588 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:21.254588 
master-0 kubenswrapper[7599]: I0318 13:20:21.254504 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:21.678328 master-0 kubenswrapper[7599]: I0318 13:20:21.678198 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/2.log" Mar 18 13:20:21.680578 master-0 kubenswrapper[7599]: I0318 13:20:21.680532 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:20:22.254728 master-0 kubenswrapper[7599]: I0318 13:20:22.254633 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:22.254728 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:22.254728 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:22.254728 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:22.254728 master-0 kubenswrapper[7599]: I0318 13:20:22.254726 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:23.254295 master-0 kubenswrapper[7599]: I0318 13:20:23.254200 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:23.254295 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:23.254295 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:23.254295 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:23.255283 master-0 kubenswrapper[7599]: I0318 13:20:23.254322 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:24.254273 master-0 kubenswrapper[7599]: I0318 13:20:24.254180 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:24.254273 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:24.254273 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:24.254273 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:24.255089 master-0 kubenswrapper[7599]: I0318 13:20:24.254287 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:25.254938 master-0 kubenswrapper[7599]: I0318 13:20:25.254795 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:25.254938 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:25.254938 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:20:25.254938 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:25.256014 master-0 kubenswrapper[7599]: I0318 13:20:25.255008 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:26.254132 master-0 kubenswrapper[7599]: I0318 13:20:26.254070 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:26.254132 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:26.254132 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:26.254132 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:26.254572 master-0 kubenswrapper[7599]: I0318 13:20:26.254145 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:26.371355 master-0 kubenswrapper[7599]: I0318 13:20:26.371279 7599 scope.go:117] "RemoveContainer" containerID="3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b" Mar 18 13:20:26.371969 master-0 kubenswrapper[7599]: E0318 13:20:26.371680 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mz4qp_openshift-machine-api(ac6d8eb6-1d5e-4757-9823-5ffe478c711c)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" 
podUID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" Mar 18 13:20:27.015889 master-0 kubenswrapper[7599]: I0318 13:20:27.015761 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:27.015889 master-0 kubenswrapper[7599]: I0318 13:20:27.015841 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:27.253889 master-0 kubenswrapper[7599]: I0318 13:20:27.253832 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:27.253889 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:27.253889 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:27.253889 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:27.254497 master-0 kubenswrapper[7599]: I0318 13:20:27.254454 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:28.254051 master-0 kubenswrapper[7599]: I0318 13:20:28.253959 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:28.254051 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:28.254051 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:28.254051 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:28.255154 master-0 kubenswrapper[7599]: 
I0318 13:20:28.254054 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:29.254329 master-0 kubenswrapper[7599]: I0318 13:20:29.254259 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:29.254329 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:29.254329 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:29.254329 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:29.255557 master-0 kubenswrapper[7599]: I0318 13:20:29.255511 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:30.016650 master-0 kubenswrapper[7599]: I0318 13:20:30.016551 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:30.016911 master-0 kubenswrapper[7599]: I0318 13:20:30.016651 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:30.254209 master-0 kubenswrapper[7599]: I0318 13:20:30.254132 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:30.254209 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:30.254209 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:30.254209 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:30.254997 master-0 kubenswrapper[7599]: I0318 13:20:30.254204 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:31.255067 master-0 kubenswrapper[7599]: I0318 13:20:31.254992 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:31.255067 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:31.255067 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:31.255067 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:31.256116 master-0 kubenswrapper[7599]: I0318 13:20:31.255126 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:32.255032 master-0 kubenswrapper[7599]: I0318 13:20:32.254960 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:32.255032 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:32.255032 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:32.255032 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:32.256530 master-0 kubenswrapper[7599]: I0318 13:20:32.255075 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:32.371494 master-0 kubenswrapper[7599]: I0318 13:20:32.371313 7599 scope.go:117] "RemoveContainer" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" Mar 18 13:20:32.371829 master-0 kubenswrapper[7599]: E0318 13:20:32.371757 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:20:33.253544 master-0 kubenswrapper[7599]: I0318 13:20:33.253481 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:33.253544 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:33.253544 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:33.253544 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:20:33.253856 master-0 kubenswrapper[7599]: I0318 13:20:33.253571 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:34.253822 master-0 kubenswrapper[7599]: I0318 13:20:34.253740 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:34.253822 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:34.253822 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:34.253822 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:34.254876 master-0 kubenswrapper[7599]: I0318 13:20:34.253831 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:35.255110 master-0 kubenswrapper[7599]: I0318 13:20:35.255014 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:35.255110 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:35.255110 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:35.255110 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:35.256124 master-0 kubenswrapper[7599]: I0318 13:20:35.255130 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:35.390843 master-0 kubenswrapper[7599]: I0318 13:20:35.390734 7599 status_manager.go:851] "Failed to get status for pod" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" pod="openshift-kube-controller-manager/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)" Mar 18 13:20:36.254603 master-0 kubenswrapper[7599]: I0318 13:20:36.254480 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:36.254603 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:36.254603 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:36.254603 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:36.254948 master-0 kubenswrapper[7599]: I0318 13:20:36.254625 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:37.254162 master-0 kubenswrapper[7599]: I0318 13:20:37.254094 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:37.254162 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:37.254162 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:37.254162 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:20:37.254866 master-0 kubenswrapper[7599]: I0318 13:20:37.254190 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:37.995134 master-0 kubenswrapper[7599]: E0318 13:20:37.995008 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:20:38.254713 master-0 kubenswrapper[7599]: I0318 13:20:38.254532 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:38.254713 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:38.254713 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:38.254713 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:38.254713 master-0 kubenswrapper[7599]: I0318 13:20:38.254639 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:39.254768 master-0 kubenswrapper[7599]: I0318 13:20:39.254658 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:39.254768 master-0 
kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:39.254768 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:39.254768 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:39.254768 master-0 kubenswrapper[7599]: I0318 13:20:39.254768 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:40.016915 master-0 kubenswrapper[7599]: I0318 13:20:40.016782 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:40.016915 master-0 kubenswrapper[7599]: I0318 13:20:40.016892 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:40.255551 master-0 kubenswrapper[7599]: I0318 13:20:40.255486 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:40.255551 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:40.255551 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:40.255551 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:20:40.256638 master-0 kubenswrapper[7599]: I0318 13:20:40.256588 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:40.371390 master-0 kubenswrapper[7599]: I0318 13:20:40.371211 7599 scope.go:117] "RemoveContainer" containerID="3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b" Mar 18 13:20:40.836614 master-0 kubenswrapper[7599]: I0318 13:20:40.836554 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/2.log" Mar 18 13:20:40.837287 master-0 kubenswrapper[7599]: I0318 13:20:40.837237 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"dc388f1effef07f85f07a2d22d20e7738827bcf12878e52c4f8e033bb80ad74c"} Mar 18 13:20:41.254730 master-0 kubenswrapper[7599]: I0318 13:20:41.254680 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:41.254730 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:41.254730 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:41.254730 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:41.255183 master-0 kubenswrapper[7599]: I0318 13:20:41.254741 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:20:42.254020 master-0 kubenswrapper[7599]: I0318 13:20:42.253965 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:42.254020 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:42.254020 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:42.254020 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:42.254696 master-0 kubenswrapper[7599]: I0318 13:20:42.254033 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:43.254847 master-0 kubenswrapper[7599]: I0318 13:20:43.254778 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:43.254847 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:43.254847 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:43.254847 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:43.255581 master-0 kubenswrapper[7599]: I0318 13:20:43.254892 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:43.555671 master-0 kubenswrapper[7599]: E0318 13:20:43.555394 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested 
timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 13:20:44.254606 master-0 kubenswrapper[7599]: I0318 13:20:44.254488 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:44.254606 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:44.254606 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:44.254606 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:44.256077 master-0 kubenswrapper[7599]: I0318 13:20:44.254608 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:45.254435 master-0 kubenswrapper[7599]: I0318 13:20:45.254340 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:45.254435 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:45.254435 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:45.254435 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:45.254435 master-0 kubenswrapper[7599]: I0318 13:20:45.254439 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:46.254868 master-0 kubenswrapper[7599]: I0318 13:20:46.254802 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:46.254868 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:46.254868 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:46.254868 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:46.255695 master-0 kubenswrapper[7599]: I0318 13:20:46.254890 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:46.372044 master-0 kubenswrapper[7599]: I0318 13:20:46.371978 7599 scope.go:117] "RemoveContainer" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" Mar 18 13:20:46.372468 master-0 kubenswrapper[7599]: E0318 13:20:46.372396 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qsnxz_openshift-cluster-storage-operator(deb67ea0-8342-40cb-b0f4-115270e878dd)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" podUID="deb67ea0-8342-40cb-b0f4-115270e878dd" Mar 18 13:20:46.882024 master-0 kubenswrapper[7599]: I0318 13:20:46.881964 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/4.log" Mar 18 13:20:46.882945 master-0 kubenswrapper[7599]: I0318 13:20:46.882899 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/3.log" Mar 18 
13:20:46.883495 master-0 kubenswrapper[7599]: I0318 13:20:46.883410 7599 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" exitCode=1 Mar 18 13:20:46.883573 master-0 kubenswrapper[7599]: I0318 13:20:46.883491 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"} Mar 18 13:20:46.883573 master-0 kubenswrapper[7599]: I0318 13:20:46.883561 7599 scope.go:117] "RemoveContainer" containerID="207d7ce188bef24394f03adf64334bd6fe10b8b414c650a2e20c36baa054efbd" Mar 18 13:20:46.884366 master-0 kubenswrapper[7599]: I0318 13:20:46.884324 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:20:46.884877 master-0 kubenswrapper[7599]: E0318 13:20:46.884817 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:20:47.255135 master-0 kubenswrapper[7599]: I0318 13:20:47.255040 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:47.255135 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:47.255135 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:47.255135 
master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:47.256137 master-0 kubenswrapper[7599]: I0318 13:20:47.255156 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:47.894329 master-0 kubenswrapper[7599]: I0318 13:20:47.894255 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/4.log" Mar 18 13:20:48.255490 master-0 kubenswrapper[7599]: I0318 13:20:48.255381 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:48.255490 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:48.255490 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:48.255490 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:48.255490 master-0 kubenswrapper[7599]: I0318 13:20:48.255472 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:49.255393 master-0 kubenswrapper[7599]: I0318 13:20:49.255267 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:49.255393 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:49.255393 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:20:49.255393 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:49.256824 master-0 kubenswrapper[7599]: I0318 13:20:49.255384 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:50.017205 master-0 kubenswrapper[7599]: I0318 13:20:50.017125 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:20:50.017518 master-0 kubenswrapper[7599]: I0318 13:20:50.017227 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:20:50.017518 master-0 kubenswrapper[7599]: I0318 13:20:50.017300 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:50.018439 master-0 kubenswrapper[7599]: I0318 13:20:50.018334 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 
18 13:20:50.018597 master-0 kubenswrapper[7599]: I0318 13:20:50.018563 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" containerID="cri-o://32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" gracePeriod=30 Mar 18 13:20:50.134589 master-0 kubenswrapper[7599]: E0318 13:20:50.134528 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(ce43e217adc4d0869adee3ba7c628c00)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" Mar 18 13:20:50.252947 master-0 kubenswrapper[7599]: I0318 13:20:50.252867 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:50.252947 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:50.252947 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:50.252947 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:50.252947 master-0 kubenswrapper[7599]: I0318 13:20:50.252924 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:50.916575 master-0 kubenswrapper[7599]: I0318 13:20:50.916530 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/3.log" Mar 18 13:20:50.917107 master-0 kubenswrapper[7599]: I0318 13:20:50.916856 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/2.log" Mar 18 13:20:50.918046 master-0 kubenswrapper[7599]: I0318 13:20:50.918016 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:20:50.918115 master-0 kubenswrapper[7599]: I0318 13:20:50.918048 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" exitCode=255 Mar 18 13:20:50.918115 master-0 kubenswrapper[7599]: I0318 13:20:50.918073 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerDied","Data":"32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375"} Mar 18 13:20:50.918115 master-0 kubenswrapper[7599]: I0318 13:20:50.918102 7599 scope.go:117] "RemoveContainer" containerID="9ead416ab4ab1c52ae601410f812ec15d5657e653d10d69eb95ccb39cc83ca8b" Mar 18 13:20:50.918598 master-0 kubenswrapper[7599]: I0318 13:20:50.918577 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:20:50.918842 master-0 kubenswrapper[7599]: E0318 13:20:50.918808 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(ce43e217adc4d0869adee3ba7c628c00)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" Mar 18 13:20:51.255027 master-0 kubenswrapper[7599]: I0318 13:20:51.254665 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:51.255027 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:51.255027 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:51.255027 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:51.255027 master-0 kubenswrapper[7599]: I0318 13:20:51.254739 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:51.928329 master-0 kubenswrapper[7599]: I0318 13:20:51.928234 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/3.log" Mar 18 13:20:51.930140 master-0 kubenswrapper[7599]: I0318 13:20:51.930078 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:20:52.254464 master-0 kubenswrapper[7599]: I0318 13:20:52.254267 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:20:52.254464 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:52.254464 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:52.254464 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:52.254464 master-0 kubenswrapper[7599]: I0318 13:20:52.254358 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:53.254750 master-0 kubenswrapper[7599]: I0318 13:20:53.254647 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:53.254750 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:53.254750 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:53.254750 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:53.255515 master-0 kubenswrapper[7599]: I0318 13:20:53.254755 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:54.254488 master-0 kubenswrapper[7599]: I0318 13:20:54.254437 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:54.254488 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:54.254488 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:54.254488 master-0 kubenswrapper[7599]: healthz 
check failed Mar 18 13:20:54.254889 master-0 kubenswrapper[7599]: I0318 13:20:54.254860 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:54.996511 master-0 kubenswrapper[7599]: E0318 13:20:54.996354 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 13:20:55.253379 master-0 kubenswrapper[7599]: I0318 13:20:55.253232 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:55.253379 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:55.253379 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:55.253379 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:55.253379 master-0 kubenswrapper[7599]: I0318 13:20:55.253300 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:56.255181 master-0 kubenswrapper[7599]: I0318 13:20:56.255039 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:56.255181 master-0 kubenswrapper[7599]: [-]has-synced failed: 
reason withheld Mar 18 13:20:56.255181 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:56.255181 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:56.255181 master-0 kubenswrapper[7599]: I0318 13:20:56.255129 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:57.016372 master-0 kubenswrapper[7599]: I0318 13:20:57.016271 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:20:57.017855 master-0 kubenswrapper[7599]: I0318 13:20:57.017215 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:20:57.017855 master-0 kubenswrapper[7599]: E0318 13:20:57.017524 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(ce43e217adc4d0869adee3ba7c628c00)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" Mar 18 13:20:57.254460 master-0 kubenswrapper[7599]: I0318 13:20:57.254386 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:57.254460 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:57.254460 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:57.254460 master-0 kubenswrapper[7599]: healthz check failed Mar 18 
13:20:57.254757 master-0 kubenswrapper[7599]: I0318 13:20:57.254476 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:58.255126 master-0 kubenswrapper[7599]: I0318 13:20:58.255073 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:58.255126 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:58.255126 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:58.255126 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:58.256199 master-0 kubenswrapper[7599]: I0318 13:20:58.256152 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:59.253543 master-0 kubenswrapper[7599]: I0318 13:20:59.253377 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:20:59.253543 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:20:59.253543 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:20:59.253543 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:20:59.254214 master-0 kubenswrapper[7599]: I0318 13:20:59.254168 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:20:59.371566 master-0 kubenswrapper[7599]: I0318 13:20:59.371477 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:20:59.372749 master-0 kubenswrapper[7599]: E0318 13:20:59.371927 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:21:00.254792 master-0 kubenswrapper[7599]: I0318 13:21:00.254701 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:00.254792 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:00.254792 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:00.254792 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:00.255300 master-0 kubenswrapper[7599]: I0318 13:21:00.254799 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:00.372873 master-0 kubenswrapper[7599]: I0318 13:21:00.372655 7599 scope.go:117] "RemoveContainer" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" Mar 18 13:21:01.000715 master-0 kubenswrapper[7599]: I0318 13:21:01.000643 7599 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/3.log" Mar 18 13:21:01.000988 master-0 kubenswrapper[7599]: I0318 13:21:01.000725 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"19465452bf90617b71d40fb46ab80696b86f027e8232a3f4b9f70c4975c500c6"} Mar 18 13:21:01.255239 master-0 kubenswrapper[7599]: I0318 13:21:01.255062 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:01.255239 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:01.255239 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:01.255239 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:01.255239 master-0 kubenswrapper[7599]: I0318 13:21:01.255140 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:02.255032 master-0 kubenswrapper[7599]: I0318 13:21:02.254872 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:02.255032 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:02.255032 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:02.255032 master-0 kubenswrapper[7599]: healthz check 
failed Mar 18 13:21:02.255032 master-0 kubenswrapper[7599]: I0318 13:21:02.255018 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:03.254380 master-0 kubenswrapper[7599]: I0318 13:21:03.254268 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:03.254380 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:03.254380 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:03.254380 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:03.254845 master-0 kubenswrapper[7599]: I0318 13:21:03.254386 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:04.253713 master-0 kubenswrapper[7599]: I0318 13:21:04.253661 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:04.253713 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:04.253713 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:04.253713 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:04.254767 master-0 kubenswrapper[7599]: I0318 13:21:04.254672 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:05.254483 master-0 kubenswrapper[7599]: I0318 13:21:05.254396 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:05.254483 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:05.254483 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:05.254483 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:05.254483 master-0 kubenswrapper[7599]: I0318 13:21:05.254469 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:06.254877 master-0 kubenswrapper[7599]: I0318 13:21:06.254814 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:06.254877 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:06.254877 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:06.254877 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:06.256046 master-0 kubenswrapper[7599]: I0318 13:21:06.256001 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:07.254126 master-0 kubenswrapper[7599]: I0318 13:21:07.253853 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:07.254126 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:07.254126 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:07.254126 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:07.254480 master-0 kubenswrapper[7599]: I0318 13:21:07.254138 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:08.253897 master-0 kubenswrapper[7599]: I0318 13:21:08.253840 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:08.253897 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:08.253897 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:08.253897 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:08.254593 master-0 kubenswrapper[7599]: I0318 13:21:08.253914 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:08.372689 master-0 kubenswrapper[7599]: I0318 13:21:08.372628 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:21:08.373492 master-0 kubenswrapper[7599]: E0318 13:21:08.373393 7599 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(ce43e217adc4d0869adee3ba7c628c00)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" Mar 18 13:21:09.254924 master-0 kubenswrapper[7599]: I0318 13:21:09.254860 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:09.254924 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:09.254924 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:09.254924 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:09.256131 master-0 kubenswrapper[7599]: I0318 13:21:09.254953 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:10.254449 master-0 kubenswrapper[7599]: I0318 13:21:10.254332 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:10.254449 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:10.254449 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:10.254449 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:10.254966 master-0 kubenswrapper[7599]: I0318 13:21:10.254469 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:10.901683 master-0 kubenswrapper[7599]: I0318 13:21:10.901605 7599 scope.go:117] "RemoveContainer" containerID="e98d728f4b1b0e813247323f6966121eae00b055f966e7db7eab7c672af9c4da" Mar 18 13:21:10.946895 master-0 kubenswrapper[7599]: I0318 13:21:10.946832 7599 scope.go:117] "RemoveContainer" containerID="0fa9267fcb1942ed177056f1462768d5db7582291e5f4b758f528a23e47041d8" Mar 18 13:21:10.967515 master-0 kubenswrapper[7599]: I0318 13:21:10.967470 7599 scope.go:117] "RemoveContainer" containerID="8dcf0d47755aa9729c9174b6d9eec6a76d4adc29a9ce8725fd5baba97772cee5" Mar 18 13:21:10.976475 master-0 kubenswrapper[7599]: I0318 13:21:10.976386 7599 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-qwgrm container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 18 13:21:10.976577 master-0 kubenswrapper[7599]: I0318 13:21:10.976514 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" podUID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 18 13:21:10.988202 master-0 kubenswrapper[7599]: I0318 13:21:10.988149 7599 scope.go:117] "RemoveContainer" containerID="8fd581d9433e603018eead43b8e27a33c255b946ee133532ab11a25007d5ddfb" Mar 18 13:21:11.007677 master-0 kubenswrapper[7599]: I0318 13:21:11.007637 7599 scope.go:117] "RemoveContainer" containerID="cc3f7f9178a5ebacbbfda5fb5509e6451e88862baa49b921cb87ec5d6cc82ee7" Mar 18 13:21:11.029647 master-0 
kubenswrapper[7599]: I0318 13:21:11.029590 7599 scope.go:117] "RemoveContainer" containerID="a2dd4b79716d36a56d21bba417e3ebe1360ab2ee3f667763e4260bf014da2347" Mar 18 13:21:11.051993 master-0 kubenswrapper[7599]: I0318 13:21:11.051484 7599 scope.go:117] "RemoveContainer" containerID="9fff8d077ff23531e072493f53ed61382efbb48a6316de383a5ae419bc8c0c9f" Mar 18 13:21:11.077760 master-0 kubenswrapper[7599]: I0318 13:21:11.077682 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-kvbzn_c3ff09ab-cbe1-49e7-8121-5f71997a5176/cluster-node-tuning-operator/0.log" Mar 18 13:21:11.077943 master-0 kubenswrapper[7599]: I0318 13:21:11.077807 7599 generic.go:334] "Generic (PLEG): container finished" podID="c3ff09ab-cbe1-49e7-8121-5f71997a5176" containerID="8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95" exitCode=1 Mar 18 13:21:11.077943 master-0 kubenswrapper[7599]: I0318 13:21:11.077889 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerDied","Data":"8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95"} Mar 18 13:21:11.078457 master-0 kubenswrapper[7599]: I0318 13:21:11.078408 7599 scope.go:117] "RemoveContainer" containerID="8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95" Mar 18 13:21:11.082849 master-0 kubenswrapper[7599]: I0318 13:21:11.082724 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/cluster-autoscaler-operator/0.log" Mar 18 13:21:11.120161 master-0 kubenswrapper[7599]: I0318 13:21:11.119896 7599 generic.go:334] "Generic (PLEG): container finished" podID="2b12af9a-8041-477f-90eb-05bb6ae7861a" 
containerID="1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0" exitCode=255 Mar 18 13:21:11.120161 master-0 kubenswrapper[7599]: I0318 13:21:11.119991 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerDied","Data":"1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0"} Mar 18 13:21:11.120851 master-0 kubenswrapper[7599]: I0318 13:21:11.120717 7599 scope.go:117] "RemoveContainer" containerID="1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0" Mar 18 13:21:11.124554 master-0 kubenswrapper[7599]: I0318 13:21:11.124504 7599 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc" exitCode=0 Mar 18 13:21:11.124694 master-0 kubenswrapper[7599]: I0318 13:21:11.124569 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc"} Mar 18 13:21:11.124939 master-0 kubenswrapper[7599]: I0318 13:21:11.124896 7599 scope.go:117] "RemoveContainer" containerID="55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc" Mar 18 13:21:11.129579 master-0 kubenswrapper[7599]: I0318 13:21:11.129536 7599 generic.go:334] "Generic (PLEG): container finished" podID="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" containerID="b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311" exitCode=0 Mar 18 13:21:11.129806 master-0 kubenswrapper[7599]: I0318 13:21:11.129605 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" 
event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerDied","Data":"b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311"} Mar 18 13:21:11.130226 master-0 kubenswrapper[7599]: I0318 13:21:11.130162 7599 scope.go:117] "RemoveContainer" containerID="b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311" Mar 18 13:21:11.153924 master-0 kubenswrapper[7599]: I0318 13:21:11.153851 7599 generic.go:334] "Generic (PLEG): container finished" podID="b89fb313-d01a-4305-b123-e253b3382b85" containerID="9a89fb2a5bf4388a7514a371a51f6ac933c33ac9c54d8113cf8c422503facd37" exitCode=0 Mar 18 13:21:11.154110 master-0 kubenswrapper[7599]: I0318 13:21:11.154005 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" event={"ID":"b89fb313-d01a-4305-b123-e253b3382b85","Type":"ContainerDied","Data":"9a89fb2a5bf4388a7514a371a51f6ac933c33ac9c54d8113cf8c422503facd37"} Mar 18 13:21:11.157388 master-0 kubenswrapper[7599]: I0318 13:21:11.157344 7599 scope.go:117] "RemoveContainer" containerID="9a89fb2a5bf4388a7514a371a51f6ac933c33ac9c54d8113cf8c422503facd37" Mar 18 13:21:11.159788 master-0 kubenswrapper[7599]: I0318 13:21:11.159720 7599 generic.go:334] "Generic (PLEG): container finished" podID="a8eff549-02f3-446e-b3a1-a66cecdc02a6" containerID="8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a" exitCode=0 Mar 18 13:21:11.159922 master-0 kubenswrapper[7599]: I0318 13:21:11.159813 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerDied","Data":"8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a"} Mar 18 13:21:11.160381 master-0 kubenswrapper[7599]: I0318 13:21:11.160340 7599 scope.go:117] "RemoveContainer" containerID="8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a" Mar 18 13:21:11.162368 master-0 
kubenswrapper[7599]: I0318 13:21:11.162335 7599 generic.go:334] "Generic (PLEG): container finished" podID="595f697b-d238-4500-84ce-1ea00377f05e" containerID="239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54" exitCode=0 Mar 18 13:21:11.162532 master-0 kubenswrapper[7599]: I0318 13:21:11.162457 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerDied","Data":"239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54"} Mar 18 13:21:11.163017 master-0 kubenswrapper[7599]: I0318 13:21:11.162958 7599 scope.go:117] "RemoveContainer" containerID="239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54" Mar 18 13:21:11.164483 master-0 kubenswrapper[7599]: I0318 13:21:11.164450 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerID="ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e" exitCode=0 Mar 18 13:21:11.164619 master-0 kubenswrapper[7599]: I0318 13:21:11.164503 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerDied","Data":"ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e"} Mar 18 13:21:11.164886 master-0 kubenswrapper[7599]: I0318 13:21:11.164846 7599 scope.go:117] "RemoveContainer" containerID="ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e" Mar 18 13:21:11.166537 master-0 kubenswrapper[7599]: I0318 13:21:11.166500 7599 generic.go:334] "Generic (PLEG): container finished" podID="394061b4-1bac-4699-96d2-88558c1adaf8" containerID="c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce" exitCode=0 Mar 18 13:21:11.166724 master-0 kubenswrapper[7599]: I0318 13:21:11.166550 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerDied","Data":"c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce"} Mar 18 13:21:11.166849 master-0 kubenswrapper[7599]: I0318 13:21:11.166810 7599 scope.go:117] "RemoveContainer" containerID="c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce" Mar 18 13:21:11.170191 master-0 kubenswrapper[7599]: I0318 13:21:11.170000 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/machine-api-operator/0.log" Mar 18 13:21:11.171260 master-0 kubenswrapper[7599]: I0318 13:21:11.170370 7599 generic.go:334] "Generic (PLEG): container finished" podID="68104a8c-3fac-4d4b-b975-bc2d045b3375" containerID="48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263" exitCode=255 Mar 18 13:21:11.171260 master-0 kubenswrapper[7599]: I0318 13:21:11.170503 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerDied","Data":"48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263"} Mar 18 13:21:11.171260 master-0 kubenswrapper[7599]: I0318 13:21:11.170952 7599 scope.go:117] "RemoveContainer" containerID="48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263" Mar 18 13:21:11.173485 master-0 kubenswrapper[7599]: I0318 13:21:11.173427 7599 generic.go:334] "Generic (PLEG): container finished" podID="b75d4622-ac12-4f82-afc9-ab63e6278b0c" containerID="efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2" exitCode=0 Mar 18 13:21:11.173485 master-0 kubenswrapper[7599]: I0318 13:21:11.173454 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" 
event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerDied","Data":"efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2"} Mar 18 13:21:11.174213 master-0 kubenswrapper[7599]: I0318 13:21:11.174180 7599 scope.go:117] "RemoveContainer" containerID="efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2" Mar 18 13:21:11.174970 master-0 kubenswrapper[7599]: I0318 13:21:11.174951 7599 generic.go:334] "Generic (PLEG): container finished" podID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerID="f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b" exitCode=0 Mar 18 13:21:11.175035 master-0 kubenswrapper[7599]: I0318 13:21:11.174978 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerDied","Data":"f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b"} Mar 18 13:21:11.175256 master-0 kubenswrapper[7599]: I0318 13:21:11.175231 7599 scope.go:117] "RemoveContainer" containerID="f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b" Mar 18 13:21:11.263446 master-0 kubenswrapper[7599]: I0318 13:21:11.259734 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:11.263446 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:11.263446 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:11.263446 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:11.263446 master-0 kubenswrapper[7599]: I0318 13:21:11.259786 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 13:21:11.993060 master-0 kubenswrapper[7599]: I0318 13:21:11.992992 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:21:12.182782 master-0 kubenswrapper[7599]: I0318 13:21:12.182716 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"a84047fd9b87cdfb49ea7e164528794ba2d0999a5e7dcba9dd9e544a562e4b04"} Mar 18 13:21:12.183608 master-0 kubenswrapper[7599]: I0318 13:21:12.183583 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:21:12.184930 master-0 kubenswrapper[7599]: I0318 13:21:12.184900 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-kvbzn_c3ff09ab-cbe1-49e7-8121-5f71997a5176/cluster-node-tuning-operator/0.log" Mar 18 13:21:12.184979 master-0 kubenswrapper[7599]: I0318 13:21:12.184955 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerStarted","Data":"3a95d1fdcb3068d3515bb9fdf082318832b4849fdd4dcfcbda66215465532969"} Mar 18 13:21:12.186920 master-0 kubenswrapper[7599]: I0318 13:21:12.186897 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/machine-api-operator/0.log" Mar 18 13:21:12.187203 master-0 kubenswrapper[7599]: I0318 13:21:12.187176 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" 
event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"8a1d2a28a02adaf96d6f547aaeb69dc4f550840901bd9f9f9311a6733ad3c203"} Mar 18 13:21:12.188683 master-0 kubenswrapper[7599]: I0318 13:21:12.188660 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"f45388cd1975238bf0e6b465991fcc80231413d8a53415460458a08b790ffcab"} Mar 18 13:21:12.190201 master-0 kubenswrapper[7599]: I0318 13:21:12.190177 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"6a4244395f4d75895479c6bde3bd69b3e184f114ebdcc985559b2e60abc18c9f"} Mar 18 13:21:12.191634 master-0 kubenswrapper[7599]: I0318 13:21:12.191592 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" event={"ID":"b89fb313-d01a-4305-b123-e253b3382b85","Type":"ContainerStarted","Data":"ed173ff2c2e57179575f62933d50841df443b61a4153c15f43a1fe1d3be7ca34"} Mar 18 13:21:12.194317 master-0 kubenswrapper[7599]: I0318 13:21:12.194275 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerStarted","Data":"56938ffab16990f3cffb8faf949e1cb22709029d512d01af84649257d7bf62fc"} Mar 18 13:21:12.195912 master-0 kubenswrapper[7599]: I0318 13:21:12.195860 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/cluster-autoscaler-operator/0.log" Mar 18 13:21:12.196272 master-0 kubenswrapper[7599]: I0318 13:21:12.196238 7599 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"46bdd357defc2dd21565769c8123edf1ef61b5a491fb0aa0d385f559a48dfecf"} Mar 18 13:21:12.199418 master-0 kubenswrapper[7599]: I0318 13:21:12.199352 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"61c248cba1bc559e1d4464ce4ef3f38b93e86ef81619df8b81ab863a153e9722"} Mar 18 13:21:12.201639 master-0 kubenswrapper[7599]: I0318 13:21:12.201598 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"e6502b58667f09b48e77dc67a79186e19cc74b3537a34e37099ff0c5b4adbd6e"} Mar 18 13:21:12.203312 master-0 kubenswrapper[7599]: I0318 13:21:12.203280 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"924646e44a1c5adbdb9870533fe34c79d2c53b932110e145fe7c6282f99e8cc8"} Mar 18 13:21:12.205377 master-0 kubenswrapper[7599]: I0318 13:21:12.205338 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"57458ff3e47bb71f51462cfaad03298ba4f4252a840f0e60177178013f47586d"} Mar 18 13:21:12.253241 master-0 kubenswrapper[7599]: I0318 13:21:12.253130 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 13:21:12.253241 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:12.253241 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:12.253241 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:12.253241 master-0 kubenswrapper[7599]: I0318 13:21:12.253183 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:13.253958 master-0 kubenswrapper[7599]: I0318 13:21:13.253876 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:13.253958 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:13.253958 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:13.253958 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:13.254539 master-0 kubenswrapper[7599]: I0318 13:21:13.253978 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:14.229602 master-0 kubenswrapper[7599]: I0318 13:21:14.229535 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:21:14.253791 master-0 kubenswrapper[7599]: I0318 13:21:14.253719 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 18 13:21:14.253791 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:14.253791 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:14.253791 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:14.253791 master-0 kubenswrapper[7599]: I0318 13:21:14.253786 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:14.372928 master-0 kubenswrapper[7599]: I0318 13:21:14.372869 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:21:14.373578 master-0 kubenswrapper[7599]: E0318 13:21:14.373539 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:21:15.254671 master-0 kubenswrapper[7599]: I0318 13:21:15.254613 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:15.254671 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:15.254671 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:15.254671 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:15.255238 master-0 kubenswrapper[7599]: I0318 13:21:15.254721 7599 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:16.253564 master-0 kubenswrapper[7599]: I0318 13:21:16.253321 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:16.253564 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:16.253564 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:16.253564 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:16.253564 master-0 kubenswrapper[7599]: I0318 13:21:16.253467 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:17.255456 master-0 kubenswrapper[7599]: I0318 13:21:17.255347 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:17.255456 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:17.255456 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:17.255456 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:17.256513 master-0 kubenswrapper[7599]: I0318 13:21:17.255474 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:18.254782 
master-0 kubenswrapper[7599]: I0318 13:21:18.254722 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:18.254782 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:18.254782 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:18.254782 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:18.255616 master-0 kubenswrapper[7599]: I0318 13:21:18.255548 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:19.255169 master-0 kubenswrapper[7599]: I0318 13:21:19.255064 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:19.255169 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:19.255169 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:19.255169 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:19.255169 master-0 kubenswrapper[7599]: I0318 13:21:19.255159 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:19.371886 master-0 kubenswrapper[7599]: I0318 13:21:19.371808 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:21:19.372363 master-0 
kubenswrapper[7599]: E0318 13:21:19.372282 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(ce43e217adc4d0869adee3ba7c628c00)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" Mar 18 13:21:20.255251 master-0 kubenswrapper[7599]: I0318 13:21:20.255177 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:20.255251 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:20.255251 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:20.255251 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:20.256501 master-0 kubenswrapper[7599]: I0318 13:21:20.255335 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:21.254524 master-0 kubenswrapper[7599]: I0318 13:21:21.254398 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:21.254524 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:21.254524 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:21.254524 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:21.254941 master-0 kubenswrapper[7599]: I0318 
13:21:21.254551 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:22.255146 master-0 kubenswrapper[7599]: I0318 13:21:22.255057 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:22.255146 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:22.255146 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:22.255146 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:22.255895 master-0 kubenswrapper[7599]: I0318 13:21:22.255141 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:23.254073 master-0 kubenswrapper[7599]: I0318 13:21:23.253992 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:23.254073 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:23.254073 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:23.254073 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:23.254399 master-0 kubenswrapper[7599]: I0318 13:21:23.254078 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:21:24.253879 master-0 kubenswrapper[7599]: I0318 13:21:24.253779 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:24.253879 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:24.253879 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:24.253879 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:24.253879 master-0 kubenswrapper[7599]: I0318 13:21:24.253865 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:25.253300 master-0 kubenswrapper[7599]: I0318 13:21:25.253222 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:25.253300 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:25.253300 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:25.253300 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:25.253300 master-0 kubenswrapper[7599]: I0318 13:21:25.253288 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:25.379001 master-0 kubenswrapper[7599]: I0318 13:21:25.378908 7599 scope.go:117] "RemoveContainer" 
containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:21:25.379647 master-0 kubenswrapper[7599]: E0318 13:21:25.379379 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:21:26.254160 master-0 kubenswrapper[7599]: I0318 13:21:26.254118 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:26.254160 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:26.254160 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:26.254160 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:26.254625 master-0 kubenswrapper[7599]: I0318 13:21:26.254594 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:27.254350 master-0 kubenswrapper[7599]: I0318 13:21:27.254266 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:27.254350 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:27.254350 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:27.254350 master-0 
kubenswrapper[7599]: healthz check failed Mar 18 13:21:27.254350 master-0 kubenswrapper[7599]: I0318 13:21:27.254347 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:28.254494 master-0 kubenswrapper[7599]: I0318 13:21:28.254352 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:21:28.254494 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:21:28.254494 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:21:28.254494 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:21:28.255446 master-0 kubenswrapper[7599]: I0318 13:21:28.254469 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:21:28.255446 master-0 kubenswrapper[7599]: I0318 13:21:28.254623 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:21:28.255596 master-0 kubenswrapper[7599]: I0318 13:21:28.255439 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5"} pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" containerMessage="Container router failed startup probe, will be restarted" Mar 18 13:21:28.255596 master-0 kubenswrapper[7599]: I0318 13:21:28.255499 7599 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" containerID="cri-o://ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5" gracePeriod=3600 Mar 18 13:21:30.372023 master-0 kubenswrapper[7599]: I0318 13:21:30.371968 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:21:31.354509 master-0 kubenswrapper[7599]: I0318 13:21:31.354454 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/3.log" Mar 18 13:21:31.356699 master-0 kubenswrapper[7599]: I0318 13:21:31.356663 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:21:31.356831 master-0 kubenswrapper[7599]: I0318 13:21:31.356734 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"ce43e217adc4d0869adee3ba7c628c00","Type":"ContainerStarted","Data":"057d6561c0f4da44fc1dbbb3cf541c1859a6f838b5eed3e585b47f89bb483358"} Mar 18 13:21:31.795235 master-0 kubenswrapper[7599]: I0318 13:21:31.795141 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 18 13:21:31.795779 master-0 kubenswrapper[7599]: E0318 13:21:31.795591 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:21:31.795779 master-0 kubenswrapper[7599]: I0318 13:21:31.795618 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:21:31.795779 master-0 kubenswrapper[7599]: E0318 
13:21:31.795696 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:21:31.795779 master-0 kubenswrapper[7599]: I0318 13:21:31.795710 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:21:31.795949 master-0 kubenswrapper[7599]: I0318 13:21:31.795910 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:21:31.795949 master-0 kubenswrapper[7599]: I0318 13:21:31.795943 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:21:31.796808 master-0 kubenswrapper[7599]: I0318 13:21:31.796767 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.799461 master-0 kubenswrapper[7599]: I0318 13:21:31.799433 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-w7jpc" Mar 18 13:21:31.799990 master-0 kubenswrapper[7599]: I0318 13:21:31.799938 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:21:31.807803 master-0 kubenswrapper[7599]: I0318 13:21:31.807736 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 18 13:21:31.873050 master-0 kubenswrapper[7599]: I0318 13:21:31.872979 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " 
pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.873275 master-0 kubenswrapper[7599]: I0318 13:21:31.873060 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.873275 master-0 kubenswrapper[7599]: I0318 13:21:31.873085 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.974921 master-0 kubenswrapper[7599]: I0318 13:21:31.974848 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.974921 master-0 kubenswrapper[7599]: I0318 13:21:31.974917 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.975189 master-0 kubenswrapper[7599]: I0318 13:21:31.974979 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.975592 master-0 kubenswrapper[7599]: I0318 13:21:31.975338 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.975592 master-0 kubenswrapper[7599]: I0318 13:21:31.975408 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:31.995694 master-0 kubenswrapper[7599]: I0318 13:21:31.995634 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:32.116338 master-0 kubenswrapper[7599]: I0318 13:21:32.116205 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:21:32.569312 master-0 kubenswrapper[7599]: I0318 13:21:32.569266 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 18 13:21:33.393035 master-0 kubenswrapper[7599]: I0318 13:21:33.392978 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerStarted","Data":"254c4c55fc5a8cefc576158a3cd6566c4e22decb0988ded62e89b98504ee1458"} Mar 18 13:21:33.393035 master-0 kubenswrapper[7599]: I0318 13:21:33.393029 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerStarted","Data":"44cadcc137f107d216cd01d7217282fd78fcbb3fd1c79dd935088ac2165b138b"} Mar 18 13:21:33.407992 master-0 kubenswrapper[7599]: I0318 13:21:33.407912 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=2.40788508 podStartE2EDuration="2.40788508s" podCreationTimestamp="2026-03-18 13:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:21:33.404383444 +0000 UTC m=+868.365437726" watchObservedRunningTime="2026-03-18 13:21:33.40788508 +0000 UTC m=+868.368939322" Mar 18 13:21:37.016909 master-0 kubenswrapper[7599]: I0318 13:21:37.016861 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:37.017676 master-0 kubenswrapper[7599]: I0318 13:21:37.017592 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:37.023216 master-0 kubenswrapper[7599]: I0318 13:21:37.023158 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:40.372260 master-0 kubenswrapper[7599]: I0318 13:21:40.372173 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:21:40.373402 master-0 kubenswrapper[7599]: E0318 13:21:40.372481 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:21:47.021873 master-0 kubenswrapper[7599]: I0318 13:21:47.021802 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:21:54.372365 master-0 kubenswrapper[7599]: I0318 13:21:54.372293 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:21:54.372992 master-0 kubenswrapper[7599]: E0318 13:21:54.372613 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-wqxpk_openshift-ingress-operator(d9d09a56-ed4c-40b7-8be1-f3934c07296e)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" podUID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" Mar 18 13:22:05.466873 master-0 kubenswrapper[7599]: I0318 13:22:05.466807 7599 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:05.467675 master-0 kubenswrapper[7599]: I0318 13:22:05.467155 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://704f6a2e758bf39725732abfe9688e37d2c20b27969fbf143112e26938fff48b" gracePeriod=30 Mar 18 13:22:05.467675 master-0 kubenswrapper[7599]: I0318 13:22:05.467211 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" containerID="cri-o://fb60ab1fea57ec49871d5edaaf3891b0d60ae36efb59421fd58289dbb8a18b9d" gracePeriod=30 Mar 18 13:22:05.467675 master-0 kubenswrapper[7599]: I0318 13:22:05.467218 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" containerID="cri-o://057d6561c0f4da44fc1dbbb3cf541c1859a6f838b5eed3e585b47f89bb483358" gracePeriod=30 Mar 18 13:22:05.467675 master-0 kubenswrapper[7599]: I0318 13:22:05.467255 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://d37eeb556049e5b3b4f4b9b22a5d363b374b83b4717134010b728f561f5eda04" gracePeriod=30 Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469190 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:05.474485 master-0 
kubenswrapper[7599]: E0318 13:22:05.469531 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469548 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469559 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469567 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469591 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469601 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469621 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469630 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469647 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469656 7599 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469671 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469681 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469700 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469712 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.469728 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469741 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469919 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469938 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469956 
7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469969 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.469985 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.470000 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.470010 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.470026 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-recovery-controller" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.470040 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager-cert-syncer" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: E0318 13:22:05.470232 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.474485 master-0 kubenswrapper[7599]: I0318 13:22:05.470249 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="kube-controller-manager" Mar 18 13:22:05.580276 master-0 kubenswrapper[7599]: I0318 
13:22:05.580223 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.580465 master-0 kubenswrapper[7599]: I0318 13:22:05.580287 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.632196 master-0 kubenswrapper[7599]: I0318 13:22:05.632135 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/cluster-policy-controller/3.log" Mar 18 13:22:05.634217 master-0 kubenswrapper[7599]: I0318 13:22:05.634188 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager-cert-syncer/0.log" Mar 18 13:22:05.634886 master-0 kubenswrapper[7599]: I0318 13:22:05.634858 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:22:05.634937 master-0 kubenswrapper[7599]: I0318 13:22:05.634914 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="057d6561c0f4da44fc1dbbb3cf541c1859a6f838b5eed3e585b47f89bb483358" exitCode=0 Mar 18 13:22:05.634937 master-0 kubenswrapper[7599]: I0318 13:22:05.634933 7599 generic.go:334] 
"Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="fb60ab1fea57ec49871d5edaaf3891b0d60ae36efb59421fd58289dbb8a18b9d" exitCode=0 Mar 18 13:22:05.635018 master-0 kubenswrapper[7599]: I0318 13:22:05.634940 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="d37eeb556049e5b3b4f4b9b22a5d363b374b83b4717134010b728f561f5eda04" exitCode=0 Mar 18 13:22:05.635018 master-0 kubenswrapper[7599]: I0318 13:22:05.634947 7599 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="704f6a2e758bf39725732abfe9688e37d2c20b27969fbf143112e26938fff48b" exitCode=2 Mar 18 13:22:05.635018 master-0 kubenswrapper[7599]: I0318 13:22:05.634981 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0" Mar 18 13:22:05.635018 master-0 kubenswrapper[7599]: I0318 13:22:05.634995 7599 scope.go:117] "RemoveContainer" containerID="32969b726c2002b7577cb3809f73ec56502c0bea8740fe451098a36b38c87375" Mar 18 13:22:05.649924 master-0 kubenswrapper[7599]: I0318 13:22:05.649882 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager-cert-syncer/0.log" Mar 18 13:22:05.650761 master-0 kubenswrapper[7599]: I0318 13:22:05.650723 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager/0.log" Mar 18 13:22:05.650880 master-0 kubenswrapper[7599]: I0318 13:22:05.650848 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.652504 master-0 kubenswrapper[7599]: I0318 13:22:05.652471 7599 scope.go:117] "RemoveContainer" containerID="44e87b551cd25fba74201071dfbbc65a904f19d68cc2d608c5f938a0ac57ad14" Mar 18 13:22:05.654605 master-0 kubenswrapper[7599]: I0318 13:22:05.654437 7599 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ce43e217adc4d0869adee3ba7c628c00" podUID="c129e07da670ff3af256d72652e4b1da" Mar 18 13:22:05.681642 master-0 kubenswrapper[7599]: I0318 13:22:05.681579 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.681642 master-0 kubenswrapper[7599]: I0318 13:22:05.681638 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.681956 master-0 kubenswrapper[7599]: I0318 13:22:05.681697 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.681956 master-0 kubenswrapper[7599]: I0318 13:22:05.681801 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:05.783206 master-0 kubenswrapper[7599]: I0318 13:22:05.783054 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir\") pod \"ce43e217adc4d0869adee3ba7c628c00\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " Mar 18 13:22:05.783206 master-0 kubenswrapper[7599]: I0318 13:22:05.783175 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ce43e217adc4d0869adee3ba7c628c00" (UID: "ce43e217adc4d0869adee3ba7c628c00"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:05.783514 master-0 kubenswrapper[7599]: I0318 13:22:05.783231 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir\") pod \"ce43e217adc4d0869adee3ba7c628c00\" (UID: \"ce43e217adc4d0869adee3ba7c628c00\") " Mar 18 13:22:05.783514 master-0 kubenswrapper[7599]: I0318 13:22:05.783313 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ce43e217adc4d0869adee3ba7c628c00" (UID: "ce43e217adc4d0869adee3ba7c628c00"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:05.783514 master-0 kubenswrapper[7599]: I0318 13:22:05.783482 7599 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:05.783514 master-0 kubenswrapper[7599]: I0318 13:22:05.783494 7599 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ce43e217adc4d0869adee3ba7c628c00-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:06.644487 master-0 kubenswrapper[7599]: I0318 13:22:06.644363 7599 generic.go:334] "Generic (PLEG): container finished" podID="cb385758-78ae-46b3-994e-fec9b14b7322" containerID="254c4c55fc5a8cefc576158a3cd6566c4e22decb0988ded62e89b98504ee1458" exitCode=0 Mar 18 13:22:06.645492 master-0 kubenswrapper[7599]: I0318 13:22:06.644505 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerDied","Data":"254c4c55fc5a8cefc576158a3cd6566c4e22decb0988ded62e89b98504ee1458"} Mar 18 13:22:06.647662 master-0 kubenswrapper[7599]: I0318 13:22:06.647602 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_ce43e217adc4d0869adee3ba7c628c00/kube-controller-manager-cert-syncer/0.log" Mar 18 13:22:06.647893 master-0 kubenswrapper[7599]: I0318 13:22:06.647744 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:06.678076 master-0 kubenswrapper[7599]: I0318 13:22:06.673548 7599 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="ce43e217adc4d0869adee3ba7c628c00" podUID="c129e07da670ff3af256d72652e4b1da" Mar 18 13:22:07.372589 master-0 kubenswrapper[7599]: I0318 13:22:07.372408 7599 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" Mar 18 13:22:07.381844 master-0 kubenswrapper[7599]: I0318 13:22:07.381778 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce43e217adc4d0869adee3ba7c628c00" path="/var/lib/kubelet/pods/ce43e217adc4d0869adee3ba7c628c00/volumes" Mar 18 13:22:07.658105 master-0 kubenswrapper[7599]: I0318 13:22:07.658006 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/4.log" Mar 18 13:22:07.659299 master-0 kubenswrapper[7599]: I0318 13:22:07.659236 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"a272363aabc94bf515887116c3094b118b2c3e6ac7802ab09d5f4466b9ec2a97"} Mar 18 13:22:07.972339 master-0 kubenswrapper[7599]: I0318 13:22:07.972296 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:22:08.123392 master-0 kubenswrapper[7599]: I0318 13:22:08.123345 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access\") pod \"cb385758-78ae-46b3-994e-fec9b14b7322\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " Mar 18 13:22:08.123737 master-0 kubenswrapper[7599]: I0318 13:22:08.123720 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir\") pod \"cb385758-78ae-46b3-994e-fec9b14b7322\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " Mar 18 13:22:08.123828 master-0 kubenswrapper[7599]: I0318 13:22:08.123789 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb385758-78ae-46b3-994e-fec9b14b7322" (UID: "cb385758-78ae-46b3-994e-fec9b14b7322"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:08.123919 master-0 kubenswrapper[7599]: I0318 13:22:08.123901 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock\") pod \"cb385758-78ae-46b3-994e-fec9b14b7322\" (UID: \"cb385758-78ae-46b3-994e-fec9b14b7322\") " Mar 18 13:22:08.124152 master-0 kubenswrapper[7599]: I0318 13:22:08.123933 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock" (OuterVolumeSpecName: "var-lock") pod "cb385758-78ae-46b3-994e-fec9b14b7322" (UID: "cb385758-78ae-46b3-994e-fec9b14b7322"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:22:08.124358 master-0 kubenswrapper[7599]: I0318 13:22:08.124343 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:08.124458 master-0 kubenswrapper[7599]: I0318 13:22:08.124445 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cb385758-78ae-46b3-994e-fec9b14b7322-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:08.125870 master-0 kubenswrapper[7599]: I0318 13:22:08.125843 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb385758-78ae-46b3-994e-fec9b14b7322" (UID: "cb385758-78ae-46b3-994e-fec9b14b7322"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:22:08.225795 master-0 kubenswrapper[7599]: I0318 13:22:08.225696 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb385758-78ae-46b3-994e-fec9b14b7322-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:22:08.667023 master-0 kubenswrapper[7599]: I0318 13:22:08.666949 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerDied","Data":"44cadcc137f107d216cd01d7217282fd78fcbb3fd1c79dd935088ac2165b138b"} Mar 18 13:22:08.667023 master-0 kubenswrapper[7599]: I0318 13:22:08.667004 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:22:08.667023 master-0 kubenswrapper[7599]: I0318 13:22:08.667021 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cadcc137f107d216cd01d7217282fd78fcbb3fd1c79dd935088ac2165b138b" Mar 18 13:22:11.090331 master-0 kubenswrapper[7599]: I0318 13:22:11.090278 7599 scope.go:117] "RemoveContainer" containerID="d37eeb556049e5b3b4f4b9b22a5d363b374b83b4717134010b728f561f5eda04" Mar 18 13:22:11.119637 master-0 kubenswrapper[7599]: I0318 13:22:11.119571 7599 scope.go:117] "RemoveContainer" containerID="704f6a2e758bf39725732abfe9688e37d2c20b27969fbf143112e26938fff48b" Mar 18 13:22:11.371111 master-0 kubenswrapper[7599]: I0318 13:22:11.370972 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:22:11.371111 master-0 kubenswrapper[7599]: I0318 13:22:11.371013 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:22:11.397346 master-0 kubenswrapper[7599]: I0318 13:22:11.397252 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:22:11.403622 master-0 kubenswrapper[7599]: I0318 13:22:11.403356 7599 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 18 13:22:11.420590 master-0 kubenswrapper[7599]: I0318 13:22:11.417328 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:22:11.439878 master-0 kubenswrapper[7599]: I0318 13:22:11.439749 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 13:22:11.687035 master-0 kubenswrapper[7599]: I0318 13:22:11.686975 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 
13:22:11.687035 master-0 kubenswrapper[7599]: I0318 13:22:11.687013 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="9f52b11b-6ba0-49bd-8220-405f8b5303fe" Mar 18 13:22:14.709912 master-0 kubenswrapper[7599]: I0318 13:22:14.709859 7599 generic.go:334] "Generic (PLEG): container finished" podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5" exitCode=0 Mar 18 13:22:14.710470 master-0 kubenswrapper[7599]: I0318 13:22:14.709924 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5"} Mar 18 13:22:14.710470 master-0 kubenswrapper[7599]: I0318 13:22:14.709967 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"3ad887c0a7265b813a19c4352fb7d718fc8a0cbf00d4ec6a7cef361eef024983"} Mar 18 13:22:14.710470 master-0 kubenswrapper[7599]: I0318 13:22:14.709995 7599 scope.go:117] "RemoveContainer" containerID="9c551ae25ef9367709ba8842a822330b49626584583ec5ef49474f8a67486429" Mar 18 13:22:14.753203 master-0 kubenswrapper[7599]: I0318 13:22:14.753070 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=3.753037731 podStartE2EDuration="3.753037731s" podCreationTimestamp="2026-03-18 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:14.750558522 +0000 UTC m=+909.711612824" watchObservedRunningTime="2026-03-18 13:22:14.753037731 +0000 UTC m=+909.714092003" Mar 18 13:22:15.251154 master-0 kubenswrapper[7599]: I0318 13:22:15.251060 7599 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:22:15.251154 master-0 kubenswrapper[7599]: I0318 13:22:15.251113 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:22:15.254507 master-0 kubenswrapper[7599]: I0318 13:22:15.254445 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:15.254507 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:15.254507 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:15.254507 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:15.254788 master-0 kubenswrapper[7599]: I0318 13:22:15.254521 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:16.253831 master-0 kubenswrapper[7599]: I0318 13:22:16.253748 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:16.253831 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:16.253831 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:16.253831 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:16.254713 master-0 kubenswrapper[7599]: I0318 13:22:16.253832 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:17.253493 master-0 kubenswrapper[7599]: I0318 13:22:17.253396 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:17.253493 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:17.253493 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:17.253493 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:17.253493 master-0 kubenswrapper[7599]: I0318 13:22:17.253473 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:18.254122 master-0 kubenswrapper[7599]: I0318 13:22:18.254065 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:18.254122 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:18.254122 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:18.254122 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:18.254733 master-0 kubenswrapper[7599]: I0318 13:22:18.254121 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:18.372084 master-0 kubenswrapper[7599]: I0318 13:22:18.371991 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:18.406881 master-0 kubenswrapper[7599]: I0318 13:22:18.406789 7599 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c9d194a5-5ca1-48c3-addf-dfe7fcbc2527" Mar 18 13:22:18.406881 master-0 kubenswrapper[7599]: I0318 13:22:18.406848 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c9d194a5-5ca1-48c3-addf-dfe7fcbc2527" Mar 18 13:22:18.424329 master-0 kubenswrapper[7599]: I0318 13:22:18.424258 7599 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:18.433005 master-0 kubenswrapper[7599]: I0318 13:22:18.432910 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:18.439402 master-0 kubenswrapper[7599]: I0318 13:22:18.439350 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:18.444652 master-0 kubenswrapper[7599]: I0318 13:22:18.444616 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:18.449088 master-0 kubenswrapper[7599]: I0318 13:22:18.449037 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:22:18.472866 master-0 kubenswrapper[7599]: W0318 13:22:18.472788 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc129e07da670ff3af256d72652e4b1da.slice/crio-12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7 WatchSource:0}: Error finding container 12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7: Status 404 returned error can't find the container with id 12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7 Mar 18 13:22:18.768629 master-0 kubenswrapper[7599]: I0318 13:22:18.768585 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55"} Mar 18 13:22:18.768629 master-0 kubenswrapper[7599]: I0318 13:22:18.768631 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7"} Mar 18 13:22:19.254432 master-0 kubenswrapper[7599]: I0318 13:22:19.254082 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:19.254432 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:19.254432 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:19.254432 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:19.254432 master-0 kubenswrapper[7599]: I0318 13:22:19.254161 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:19.777464 master-0 kubenswrapper[7599]: I0318 13:22:19.777394 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"} Mar 18 13:22:19.777758 master-0 kubenswrapper[7599]: I0318 13:22:19.777737 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"} Mar 18 13:22:19.777849 master-0 kubenswrapper[7599]: I0318 13:22:19.777831 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"} Mar 18 13:22:19.802203 master-0 kubenswrapper[7599]: I0318 13:22:19.802108 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.802090322 podStartE2EDuration="1.802090322s" 
podCreationTimestamp="2026-03-18 13:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:19.7991227 +0000 UTC m=+914.760176952" watchObservedRunningTime="2026-03-18 13:22:19.802090322 +0000 UTC m=+914.763144564" Mar 18 13:22:20.253353 master-0 kubenswrapper[7599]: I0318 13:22:20.253288 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:20.253353 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:20.253353 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:20.253353 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:20.253353 master-0 kubenswrapper[7599]: I0318 13:22:20.253370 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:21.255292 master-0 kubenswrapper[7599]: I0318 13:22:21.255208 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:21.255292 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:21.255292 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:21.255292 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:21.256006 master-0 kubenswrapper[7599]: I0318 13:22:21.255305 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:22.254362 master-0 kubenswrapper[7599]: I0318 13:22:22.254293 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:22.254362 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:22.254362 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:22.254362 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:22.254961 master-0 kubenswrapper[7599]: I0318 13:22:22.254378 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:23.253767 master-0 kubenswrapper[7599]: I0318 13:22:23.253705 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:23.253767 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:23.253767 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:23.253767 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:23.254676 master-0 kubenswrapper[7599]: I0318 13:22:23.253778 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:24.254830 master-0 kubenswrapper[7599]: I0318 13:22:24.254772 7599 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:24.254830 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:24.254830 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:24.254830 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:24.255998 master-0 kubenswrapper[7599]: I0318 13:22:24.255818 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:25.254554 master-0 kubenswrapper[7599]: I0318 13:22:25.254304 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:25.254554 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:25.254554 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:25.254554 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:25.254554 master-0 kubenswrapper[7599]: I0318 13:22:25.254408 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:26.253079 master-0 kubenswrapper[7599]: I0318 13:22:26.253004 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:26.253079 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:26.253079 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:26.253079 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:26.253079 master-0 kubenswrapper[7599]: I0318 13:22:26.253062 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:27.254713 master-0 kubenswrapper[7599]: I0318 13:22:27.254641 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:27.254713 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:27.254713 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:27.254713 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:27.255652 master-0 kubenswrapper[7599]: I0318 13:22:27.254724 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:28.254085 master-0 kubenswrapper[7599]: I0318 13:22:28.254018 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:28.254085 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:28.254085 master-0 kubenswrapper[7599]: [+]process-running ok 
Mar 18 13:22:28.254085 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:28.254503 master-0 kubenswrapper[7599]: I0318 13:22:28.254085 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:28.439787 master-0 kubenswrapper[7599]: I0318 13:22:28.439681 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:28.439787 master-0 kubenswrapper[7599]: I0318 13:22:28.439742 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:28.439787 master-0 kubenswrapper[7599]: I0318 13:22:28.439755 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:28.439787 master-0 kubenswrapper[7599]: I0318 13:22:28.439791 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:28.440733 master-0 kubenswrapper[7599]: I0318 13:22:28.440098 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:22:28.440733 master-0 kubenswrapper[7599]: I0318 13:22:28.440133 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:22:28.444508 master-0 kubenswrapper[7599]: I0318 13:22:28.444470 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:28.858628 master-0 kubenswrapper[7599]: I0318 13:22:28.858453 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:22:29.259205 master-0 kubenswrapper[7599]: I0318 13:22:29.258701 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:29.259205 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:29.259205 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:29.259205 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:29.259205 master-0 kubenswrapper[7599]: I0318 13:22:29.258786 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:30.253753 master-0 kubenswrapper[7599]: I0318 13:22:30.253692 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:30.253753 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:30.253753 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:30.253753 master-0 kubenswrapper[7599]: healthz check failed 
Mar 18 13:22:30.254341 master-0 kubenswrapper[7599]: I0318 13:22:30.253783 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:31.255109 master-0 kubenswrapper[7599]: I0318 13:22:31.255022 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:31.255109 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:31.255109 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:31.255109 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:31.255814 master-0 kubenswrapper[7599]: I0318 13:22:31.255149 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:32.253335 master-0 kubenswrapper[7599]: I0318 13:22:32.253281 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:32.253335 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:32.253335 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:32.253335 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:32.253622 master-0 kubenswrapper[7599]: I0318 13:22:32.253366 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" 
podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:33.140592 master-0 kubenswrapper[7599]: I0318 13:22:33.140526 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:33.141315 master-0 kubenswrapper[7599]: E0318 13:22:33.140872 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:22:33.141315 master-0 kubenswrapper[7599]: I0318 13:22:33.140893 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:22:33.141315 master-0 kubenswrapper[7599]: I0318 13:22:33.141088 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:22:33.141820 master-0 kubenswrapper[7599]: I0318 13:22:33.141777 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.143907 master-0 kubenswrapper[7599]: I0318 13:22:33.143878 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sd5ht" Mar 18 13:22:33.145347 master-0 kubenswrapper[7599]: I0318 13:22:33.145312 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:22:33.156631 master-0 kubenswrapper[7599]: I0318 13:22:33.156573 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:33.253579 master-0 kubenswrapper[7599]: I0318 13:22:33.253544 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:33.253579 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:33.253579 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:33.253579 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:33.253873 master-0 kubenswrapper[7599]: I0318 13:22:33.253848 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:33.300757 master-0 kubenswrapper[7599]: I0318 13:22:33.300701 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 
13:22:33.300757 master-0 kubenswrapper[7599]: I0318 13:22:33.300743 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.300974 master-0 kubenswrapper[7599]: I0318 13:22:33.300805 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.411187 master-0 kubenswrapper[7599]: I0318 13:22:33.411097 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.411538 master-0 kubenswrapper[7599]: I0318 13:22:33.411309 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.411538 master-0 kubenswrapper[7599]: I0318 13:22:33.411371 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: 
\"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.411761 master-0 kubenswrapper[7599]: I0318 13:22:33.411705 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.412314 master-0 kubenswrapper[7599]: I0318 13:22:33.412274 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.435025 master-0 kubenswrapper[7599]: I0318 13:22:33.434951 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.469653 master-0 kubenswrapper[7599]: I0318 13:22:33.469564 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:22:33.957637 master-0 kubenswrapper[7599]: I0318 13:22:33.957486 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:34.254893 master-0 kubenswrapper[7599]: I0318 13:22:34.254726 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:34.254893 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:34.254893 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:34.254893 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:34.254893 master-0 kubenswrapper[7599]: I0318 13:22:34.254809 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:34.891464 master-0 kubenswrapper[7599]: I0318 13:22:34.891380 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"32b975a4-4df1-4196-b3b4-b66a682f1c07","Type":"ContainerStarted","Data":"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d"} Mar 18 13:22:34.891464 master-0 kubenswrapper[7599]: I0318 13:22:34.891455 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"32b975a4-4df1-4196-b3b4-b66a682f1c07","Type":"ContainerStarted","Data":"9c0c0424d6321c2bca6a91938ccdfd31f722ed50892f3ff8b8322a4e7437e11e"} Mar 18 13:22:34.909140 master-0 kubenswrapper[7599]: I0318 13:22:34.909051 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.909032966 podStartE2EDuration="1.909032966s" podCreationTimestamp="2026-03-18 13:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:34.908259685 +0000 UTC m=+929.869313967" watchObservedRunningTime="2026-03-18 13:22:34.909032966 +0000 UTC m=+929.870087208" Mar 18 13:22:35.253896 master-0 kubenswrapper[7599]: I0318 13:22:35.253847 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:35.253896 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:35.253896 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:35.253896 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:35.254254 master-0 kubenswrapper[7599]: I0318 13:22:35.254223 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:36.255268 master-0 kubenswrapper[7599]: I0318 13:22:36.255196 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:36.255268 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:36.255268 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:36.255268 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:36.255951 master-0 kubenswrapper[7599]: I0318 13:22:36.255280 7599 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:37.253359 master-0 kubenswrapper[7599]: I0318 13:22:37.253290 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:37.253359 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:37.253359 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:37.253359 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:37.253693 master-0 kubenswrapper[7599]: I0318 13:22:37.253362 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:38.254775 master-0 kubenswrapper[7599]: I0318 13:22:38.254683 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:38.254775 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:38.254775 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:38.254775 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:38.255613 master-0 kubenswrapper[7599]: I0318 13:22:38.254789 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 13:22:38.440045 master-0 kubenswrapper[7599]: I0318 13:22:38.439975 7599 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:22:38.440326 master-0 kubenswrapper[7599]: I0318 13:22:38.440086 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:22:39.254772 master-0 kubenswrapper[7599]: I0318 13:22:39.254691 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:39.254772 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:39.254772 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:39.254772 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:39.256083 master-0 kubenswrapper[7599]: I0318 13:22:39.254788 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:39.739712 master-0 kubenswrapper[7599]: I0318 13:22:39.739614 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:22:39.740096 master-0 kubenswrapper[7599]: I0318 13:22:39.739997 7599 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="32b975a4-4df1-4196-b3b4-b66a682f1c07" containerName="installer" containerID="cri-o://aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d" gracePeriod=30 Mar 18 13:22:40.253886 master-0 kubenswrapper[7599]: I0318 13:22:40.253815 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:40.253886 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:40.253886 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:40.253886 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:40.253886 master-0 kubenswrapper[7599]: I0318 13:22:40.253879 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:41.255193 master-0 kubenswrapper[7599]: I0318 13:22:41.255070 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:41.255193 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:41.255193 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:41.255193 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:41.255193 master-0 kubenswrapper[7599]: I0318 13:22:41.255152 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:42.253776 master-0 kubenswrapper[7599]: I0318 13:22:42.253691 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:42.253776 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:42.253776 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:42.253776 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:42.254280 master-0 kubenswrapper[7599]: I0318 13:22:42.253812 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:43.253496 master-0 kubenswrapper[7599]: I0318 13:22:43.253453 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:22:43.253496 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:22:43.253496 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:22:43.253496 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:22:43.254093 master-0 kubenswrapper[7599]: I0318 13:22:43.254067 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:22:44.254269 master-0 kubenswrapper[7599]: I0318 13:22:44.254219 7599 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:44.254269 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:44.254269 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:44.254269 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:44.254269 master-0 kubenswrapper[7599]: I0318 13:22:44.254272 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:44.343014 master-0 kubenswrapper[7599]: I0318 13:22:44.342889 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:22:44.344605 master-0 kubenswrapper[7599]: I0318 13:22:44.344544 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.351087 master-0 kubenswrapper[7599]: I0318 13:22:44.351022 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:22:44.486498 master-0 kubenswrapper[7599]: I0318 13:22:44.486458 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.486789 master-0 kubenswrapper[7599]: I0318 13:22:44.486766 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.487032 master-0 kubenswrapper[7599]: I0318 13:22:44.487017 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.588452 master-0 kubenswrapper[7599]: I0318 13:22:44.588234 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.588452 master-0 kubenswrapper[7599]: I0318 13:22:44.588383 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.588733 master-0 kubenswrapper[7599]: I0318 13:22:44.588490 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.588733 master-0 kubenswrapper[7599]: I0318 13:22:44.588525 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.588733 master-0 kubenswrapper[7599]: I0318 13:22:44.588494 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.604181 master-0 kubenswrapper[7599]: I0318 13:22:44.604117 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access\") pod \"installer-2-master-0\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:44.712468 master-0 kubenswrapper[7599]: I0318 13:22:44.711941 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:45.118362 master-0 kubenswrapper[7599]: I0318 13:22:45.118312 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:22:45.124275 master-0 kubenswrapper[7599]: W0318 13:22:45.124198 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod552316d4_cfd4_45fe_8a05_d614c7326641.slice/crio-4cba852792f376c53ac741dbadebf84341d398d26e5948059c709e559af01fdf WatchSource:0}: Error finding container 4cba852792f376c53ac741dbadebf84341d398d26e5948059c709e559af01fdf: Status 404 returned error can't find the container with id 4cba852792f376c53ac741dbadebf84341d398d26e5948059c709e559af01fdf
Mar 18 13:22:45.253330 master-0 kubenswrapper[7599]: I0318 13:22:45.253119 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:45.253330 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:45.253330 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:45.253330 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:45.253330 master-0 kubenswrapper[7599]: I0318 13:22:45.253201 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:45.961606 master-0 kubenswrapper[7599]: I0318 13:22:45.961552 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"552316d4-cfd4-45fe-8a05-d614c7326641","Type":"ContainerStarted","Data":"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"}
Mar 18 13:22:45.962608 master-0 kubenswrapper[7599]: I0318 13:22:45.962550 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"552316d4-cfd4-45fe-8a05-d614c7326641","Type":"ContainerStarted","Data":"4cba852792f376c53ac741dbadebf84341d398d26e5948059c709e559af01fdf"}
Mar 18 13:22:45.991772 master-0 kubenswrapper[7599]: I0318 13:22:45.991680 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.991660942 podStartE2EDuration="1.991660942s" podCreationTimestamp="2026-03-18 13:22:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:22:45.978189986 +0000 UTC m=+940.939244258" watchObservedRunningTime="2026-03-18 13:22:45.991660942 +0000 UTC m=+940.952715184"
Mar 18 13:22:46.254375 master-0 kubenswrapper[7599]: I0318 13:22:46.254297 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:46.254375 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:46.254375 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:46.254375 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:46.254873 master-0 kubenswrapper[7599]: I0318 13:22:46.254384 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:47.255216 master-0 kubenswrapper[7599]: I0318 13:22:47.255111 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:47.255216 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:47.255216 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:47.255216 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:47.256224 master-0 kubenswrapper[7599]: I0318 13:22:47.255218 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:48.255107 master-0 kubenswrapper[7599]: I0318 13:22:48.254970 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:48.255107 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:48.255107 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:48.255107 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:48.256188 master-0 kubenswrapper[7599]: I0318 13:22:48.255113 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:48.444285 master-0 kubenswrapper[7599]: I0318 13:22:48.444200 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:48.448970 master-0 kubenswrapper[7599]: I0318 13:22:48.448915 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:22:49.254036 master-0 kubenswrapper[7599]: I0318 13:22:49.253971 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:49.254036 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:49.254036 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:49.254036 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:49.254036 master-0 kubenswrapper[7599]: I0318 13:22:49.254034 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:50.255217 master-0 kubenswrapper[7599]: I0318 13:22:50.255121 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:50.255217 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:50.255217 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:50.255217 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:50.256622 master-0 kubenswrapper[7599]: I0318 13:22:50.255225 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:51.254126 master-0 kubenswrapper[7599]: I0318 13:22:51.254010 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:51.254126 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:51.254126 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:51.254126 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:51.254126 master-0 kubenswrapper[7599]: I0318 13:22:51.254094 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:52.255173 master-0 kubenswrapper[7599]: I0318 13:22:52.255053 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:52.255173 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:52.255173 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:52.255173 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:52.255173 master-0 kubenswrapper[7599]: I0318 13:22:52.255166 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:53.253988 master-0 kubenswrapper[7599]: I0318 13:22:53.253891 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:53.253988 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:53.253988 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:53.253988 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:53.254457 master-0 kubenswrapper[7599]: I0318 13:22:53.253984 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:54.254059 master-0 kubenswrapper[7599]: I0318 13:22:54.253975 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:54.254059 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:54.254059 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:54.254059 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:54.254059 master-0 kubenswrapper[7599]: I0318 13:22:54.254041 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:55.254451 master-0 kubenswrapper[7599]: I0318 13:22:55.254374 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:55.254451 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:55.254451 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:55.254451 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:55.255062 master-0 kubenswrapper[7599]: I0318 13:22:55.254496 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:56.254806 master-0 kubenswrapper[7599]: I0318 13:22:56.254690 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:56.254806 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:56.254806 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:56.254806 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:56.254806 master-0 kubenswrapper[7599]: I0318 13:22:56.254784 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:57.254076 master-0 kubenswrapper[7599]: I0318 13:22:57.254012 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:57.254076 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:57.254076 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:57.254076 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:57.254474 master-0 kubenswrapper[7599]: I0318 13:22:57.254093 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:58.253983 master-0 kubenswrapper[7599]: I0318 13:22:58.253927 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:58.253983 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:58.253983 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:58.253983 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:58.254582 master-0 kubenswrapper[7599]: I0318 13:22:58.253990 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:58.535388 master-0 kubenswrapper[7599]: I0318 13:22:58.535194 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:22:58.535878 master-0 kubenswrapper[7599]: I0318 13:22:58.535480 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="552316d4-cfd4-45fe-8a05-d614c7326641" containerName="installer" containerID="cri-o://e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4" gracePeriod=30
Mar 18 13:22:58.936998 master-0 kubenswrapper[7599]: I0318 13:22:58.936937 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_552316d4-cfd4-45fe-8a05-d614c7326641/installer/0.log"
Mar 18 13:22:58.937212 master-0 kubenswrapper[7599]: I0318 13:22:58.937047 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:59.067060 master-0 kubenswrapper[7599]: I0318 13:22:59.067019 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_552316d4-cfd4-45fe-8a05-d614c7326641/installer/0.log"
Mar 18 13:22:59.067294 master-0 kubenswrapper[7599]: I0318 13:22:59.067081 7599 generic.go:334] "Generic (PLEG): container finished" podID="552316d4-cfd4-45fe-8a05-d614c7326641" containerID="e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4" exitCode=1
Mar 18 13:22:59.067294 master-0 kubenswrapper[7599]: I0318 13:22:59.067113 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"552316d4-cfd4-45fe-8a05-d614c7326641","Type":"ContainerDied","Data":"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"}
Mar 18 13:22:59.067294 master-0 kubenswrapper[7599]: I0318 13:22:59.067139 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"552316d4-cfd4-45fe-8a05-d614c7326641","Type":"ContainerDied","Data":"4cba852792f376c53ac741dbadebf84341d398d26e5948059c709e559af01fdf"}
Mar 18 13:22:59.067294 master-0 kubenswrapper[7599]: I0318 13:22:59.067157 7599 scope.go:117] "RemoveContainer" containerID="e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"
Mar 18 13:22:59.067461 master-0 kubenswrapper[7599]: I0318 13:22:59.067298 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 13:22:59.090235 master-0 kubenswrapper[7599]: I0318 13:22:59.090175 7599 scope.go:117] "RemoveContainer" containerID="e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"
Mar 18 13:22:59.090806 master-0 kubenswrapper[7599]: E0318 13:22:59.090768 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4\": container with ID starting with e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4 not found: ID does not exist" containerID="e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"
Mar 18 13:22:59.090874 master-0 kubenswrapper[7599]: I0318 13:22:59.090802 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4"} err="failed to get container status \"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4\": rpc error: code = NotFound desc = could not find container \"e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4\": container with ID starting with e58c0807de65e2040eacaa35e551e04f36267f67d17d480710d54867d79170a4 not found: ID does not exist"
Mar 18 13:22:59.108514 master-0 kubenswrapper[7599]: I0318 13:22:59.108461 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock\") pod \"552316d4-cfd4-45fe-8a05-d614c7326641\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") "
Mar 18 13:22:59.108514 master-0 kubenswrapper[7599]: I0318 13:22:59.108520 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir\") pod \"552316d4-cfd4-45fe-8a05-d614c7326641\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") "
Mar 18 13:22:59.108882 master-0 kubenswrapper[7599]: I0318 13:22:59.108597 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access\") pod \"552316d4-cfd4-45fe-8a05-d614c7326641\" (UID: \"552316d4-cfd4-45fe-8a05-d614c7326641\") "
Mar 18 13:22:59.108882 master-0 kubenswrapper[7599]: I0318 13:22:59.108589 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock" (OuterVolumeSpecName: "var-lock") pod "552316d4-cfd4-45fe-8a05-d614c7326641" (UID: "552316d4-cfd4-45fe-8a05-d614c7326641"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:22:59.108882 master-0 kubenswrapper[7599]: I0318 13:22:59.108604 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "552316d4-cfd4-45fe-8a05-d614c7326641" (UID: "552316d4-cfd4-45fe-8a05-d614c7326641"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:22:59.109020 master-0 kubenswrapper[7599]: I0318 13:22:59.108947 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:22:59.109020 master-0 kubenswrapper[7599]: I0318 13:22:59.108965 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/552316d4-cfd4-45fe-8a05-d614c7326641-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:22:59.112599 master-0 kubenswrapper[7599]: I0318 13:22:59.112198 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "552316d4-cfd4-45fe-8a05-d614c7326641" (UID: "552316d4-cfd4-45fe-8a05-d614c7326641"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:22:59.211108 master-0 kubenswrapper[7599]: I0318 13:22:59.211008 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/552316d4-cfd4-45fe-8a05-d614c7326641-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:22:59.254013 master-0 kubenswrapper[7599]: I0318 13:22:59.253899 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:22:59.254013 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:22:59.254013 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:22:59.254013 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:22:59.254753 master-0 kubenswrapper[7599]: I0318 13:22:59.254078 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:22:59.415327 master-0 kubenswrapper[7599]: I0318 13:22:59.415271 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:22:59.425456 master-0 kubenswrapper[7599]: I0318 13:22:59.425376 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 13:23:00.254102 master-0 kubenswrapper[7599]: I0318 13:23:00.254020 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:00.254102 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:00.254102 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:00.254102 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:00.254102 master-0 kubenswrapper[7599]: I0318 13:23:00.254106 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:01.254506 master-0 kubenswrapper[7599]: I0318 13:23:01.254380 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:01.254506 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:01.254506 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:01.254506 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:01.255686 master-0 kubenswrapper[7599]: I0318 13:23:01.254504 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:01.388740 master-0 kubenswrapper[7599]: I0318 13:23:01.388669 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552316d4-cfd4-45fe-8a05-d614c7326641" path="/var/lib/kubelet/pods/552316d4-cfd4-45fe-8a05-d614c7326641/volumes"
Mar 18 13:23:02.254075 master-0 kubenswrapper[7599]: I0318 13:23:02.253998 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:02.254075 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:02.254075 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:02.254075 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:02.254075 master-0 kubenswrapper[7599]: I0318 13:23:02.254061 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:02.535855 master-0 kubenswrapper[7599]: I0318 13:23:02.535712 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 13:23:02.536178 master-0 kubenswrapper[7599]: E0318 13:23:02.536139 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552316d4-cfd4-45fe-8a05-d614c7326641" containerName="installer"
Mar 18 13:23:02.536178 master-0 kubenswrapper[7599]: I0318 13:23:02.536173 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="552316d4-cfd4-45fe-8a05-d614c7326641" containerName="installer"
Mar 18 13:23:02.536405 master-0 kubenswrapper[7599]: I0318 13:23:02.536371 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="552316d4-cfd4-45fe-8a05-d614c7326641" containerName="installer"
Mar 18 13:23:02.537095 master-0 kubenswrapper[7599]: I0318 13:23:02.537059 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.552025 master-0 kubenswrapper[7599]: I0318 13:23:02.551968 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 13:23:02.660513 master-0 kubenswrapper[7599]: I0318 13:23:02.659784 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.660513 master-0 kubenswrapper[7599]: I0318 13:23:02.659877 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.660513 master-0 kubenswrapper[7599]: I0318 13:23:02.659967 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.761830 master-0 kubenswrapper[7599]: I0318 13:23:02.761736 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.762047 master-0 kubenswrapper[7599]: I0318 13:23:02.761900 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.762047 master-0 kubenswrapper[7599]: I0318 13:23:02.761937 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.762047 master-0 kubenswrapper[7599]: I0318 13:23:02.762007 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.762484 master-0 kubenswrapper[7599]: I0318 13:23:02.762387 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.789007 master-0 kubenswrapper[7599]: I0318 13:23:02.788876 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:02.862899 master-0 kubenswrapper[7599]: I0318 13:23:02.862838 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: I0318 13:23:03.257158 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: I0318 13:23:03.258077 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:03.258837 master-0 kubenswrapper[7599]: I0318 13:23:03.258148 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:03.265961 master-0 kubenswrapper[7599]: W0318 13:23:03.265913 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9b853631_ff77_4643_aa07_b1f8056320a3.slice/crio-a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d WatchSource:0}: Error finding container a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d: Status 404 returned error can't find the container with id a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d
Mar 18 13:23:04.124392 master-0 kubenswrapper[7599]: I0318 13:23:04.124293 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerStarted","Data":"64aef303c60ed75302cdf53b54c1f5e7b01831e38260821ecee71573b2f8873b"}
Mar 18 13:23:04.124392 master-0 kubenswrapper[7599]: I0318 13:23:04.124375 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerStarted","Data":"a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d"} Mar 18 13:23:04.162026 master-0 kubenswrapper[7599]: I0318 13:23:04.161923 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.161900603 podStartE2EDuration="2.161900603s" podCreationTimestamp="2026-03-18 13:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:23:04.153182789 +0000 UTC m=+959.114237051" watchObservedRunningTime="2026-03-18 13:23:04.161900603 +0000 UTC m=+959.122954855" Mar 18 13:23:04.254804 master-0 kubenswrapper[7599]: I0318 13:23:04.254719 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:04.254804 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:04.254804 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:04.254804 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:04.254804 master-0 kubenswrapper[7599]: I0318 13:23:04.254782 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:05.254518 master-0 kubenswrapper[7599]: I0318 13:23:05.254173 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:05.254518 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:05.254518 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:05.254518 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:05.255199 master-0 kubenswrapper[7599]: I0318 13:23:05.254537 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:05.991831 master-0 kubenswrapper[7599]: I0318 13:23:05.991763 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_32b975a4-4df1-4196-b3b4-b66a682f1c07/installer/0.log" Mar 18 13:23:05.992082 master-0 kubenswrapper[7599]: I0318 13:23:05.991870 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:23:06.109786 master-0 kubenswrapper[7599]: I0318 13:23:06.109710 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock\") pod \"32b975a4-4df1-4196-b3b4-b66a682f1c07\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " Mar 18 13:23:06.110060 master-0 kubenswrapper[7599]: I0318 13:23:06.109837 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access\") pod \"32b975a4-4df1-4196-b3b4-b66a682f1c07\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " Mar 18 13:23:06.110060 master-0 kubenswrapper[7599]: I0318 13:23:06.109870 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir\") pod \"32b975a4-4df1-4196-b3b4-b66a682f1c07\" (UID: \"32b975a4-4df1-4196-b3b4-b66a682f1c07\") " Mar 18 13:23:06.110472 master-0 kubenswrapper[7599]: I0318 13:23:06.110399 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "32b975a4-4df1-4196-b3b4-b66a682f1c07" (UID: "32b975a4-4df1-4196-b3b4-b66a682f1c07"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:06.110724 master-0 kubenswrapper[7599]: I0318 13:23:06.110493 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock" (OuterVolumeSpecName: "var-lock") pod "32b975a4-4df1-4196-b3b4-b66a682f1c07" (UID: "32b975a4-4df1-4196-b3b4-b66a682f1c07"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:06.123498 master-0 kubenswrapper[7599]: I0318 13:23:06.115643 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "32b975a4-4df1-4196-b3b4-b66a682f1c07" (UID: "32b975a4-4df1-4196-b3b4-b66a682f1c07"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:23:06.142285 master-0 kubenswrapper[7599]: I0318 13:23:06.142221 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_32b975a4-4df1-4196-b3b4-b66a682f1c07/installer/0.log" Mar 18 13:23:06.142285 master-0 kubenswrapper[7599]: I0318 13:23:06.142276 7599 generic.go:334] "Generic (PLEG): container finished" podID="32b975a4-4df1-4196-b3b4-b66a682f1c07" containerID="aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d" exitCode=1 Mar 18 13:23:06.142626 master-0 kubenswrapper[7599]: I0318 13:23:06.142308 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"32b975a4-4df1-4196-b3b4-b66a682f1c07","Type":"ContainerDied","Data":"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d"} Mar 18 13:23:06.142626 master-0 kubenswrapper[7599]: I0318 13:23:06.142335 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"32b975a4-4df1-4196-b3b4-b66a682f1c07","Type":"ContainerDied","Data":"9c0c0424d6321c2bca6a91938ccdfd31f722ed50892f3ff8b8322a4e7437e11e"} Mar 18 13:23:06.142626 master-0 kubenswrapper[7599]: I0318 13:23:06.142352 7599 scope.go:117] "RemoveContainer" containerID="aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d" Mar 18 13:23:06.142626 master-0 kubenswrapper[7599]: I0318 13:23:06.142481 7599 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 13:23:06.161313 master-0 kubenswrapper[7599]: I0318 13:23:06.161252 7599 scope.go:117] "RemoveContainer" containerID="aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d" Mar 18 13:23:06.162134 master-0 kubenswrapper[7599]: E0318 13:23:06.162076 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d\": container with ID starting with aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d not found: ID does not exist" containerID="aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d" Mar 18 13:23:06.162239 master-0 kubenswrapper[7599]: I0318 13:23:06.162124 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d"} err="failed to get container status \"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d\": rpc error: code = NotFound desc = could not find container \"aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d\": container with ID starting with aa8d03ff85a8f58a45c281a82585326d1af9c93c72fbdc7dc00702eea6ae2a0d not found: ID does not exist" Mar 18 13:23:06.211529 master-0 kubenswrapper[7599]: I0318 13:23:06.211353 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:06.211529 master-0 kubenswrapper[7599]: I0318 13:23:06.211400 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32b975a4-4df1-4196-b3b4-b66a682f1c07-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:06.211529 master-0 
kubenswrapper[7599]: I0318 13:23:06.211434 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32b975a4-4df1-4196-b3b4-b66a682f1c07-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:06.261664 master-0 kubenswrapper[7599]: I0318 13:23:06.261555 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:06.261664 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:06.261664 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:06.261664 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:06.261664 master-0 kubenswrapper[7599]: I0318 13:23:06.261630 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:06.262716 master-0 kubenswrapper[7599]: I0318 13:23:06.262660 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:23:06.288217 master-0 kubenswrapper[7599]: I0318 13:23:06.288159 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 13:23:07.254561 master-0 kubenswrapper[7599]: I0318 13:23:07.254488 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:07.254561 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:07.254561 master-0 kubenswrapper[7599]: 
[+]process-running ok Mar 18 13:23:07.254561 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:07.255178 master-0 kubenswrapper[7599]: I0318 13:23:07.254582 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:07.392892 master-0 kubenswrapper[7599]: I0318 13:23:07.392832 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b975a4-4df1-4196-b3b4-b66a682f1c07" path="/var/lib/kubelet/pods/32b975a4-4df1-4196-b3b4-b66a682f1c07/volumes" Mar 18 13:23:08.254502 master-0 kubenswrapper[7599]: I0318 13:23:08.254395 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:08.254502 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:08.254502 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:08.254502 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:08.255061 master-0 kubenswrapper[7599]: I0318 13:23:08.255019 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:09.254274 master-0 kubenswrapper[7599]: I0318 13:23:09.254204 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:09.254274 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 
13:23:09.254274 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:09.254274 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:09.254274 master-0 kubenswrapper[7599]: I0318 13:23:09.254269 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:10.254955 master-0 kubenswrapper[7599]: I0318 13:23:10.254868 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:10.254955 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:10.254955 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:10.254955 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:10.256107 master-0 kubenswrapper[7599]: I0318 13:23:10.254961 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:11.164208 master-0 kubenswrapper[7599]: I0318 13:23:11.164151 7599 scope.go:117] "RemoveContainer" containerID="fb60ab1fea57ec49871d5edaaf3891b0d60ae36efb59421fd58289dbb8a18b9d" Mar 18 13:23:11.254547 master-0 kubenswrapper[7599]: I0318 13:23:11.254474 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:11.254547 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:11.254547 master-0 
kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:11.254547 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:11.254825 master-0 kubenswrapper[7599]: I0318 13:23:11.254558 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:12.254445 master-0 kubenswrapper[7599]: I0318 13:23:12.254355 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:12.254445 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:12.254445 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:12.254445 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:12.254445 master-0 kubenswrapper[7599]: I0318 13:23:12.254433 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:13.254138 master-0 kubenswrapper[7599]: I0318 13:23:13.254060 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:13.254138 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:13.254138 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:13.254138 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:13.254880 master-0 kubenswrapper[7599]: I0318 13:23:13.254168 7599 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:14.253649 master-0 kubenswrapper[7599]: I0318 13:23:14.253572 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:14.253649 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:14.253649 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:14.253649 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:14.254000 master-0 kubenswrapper[7599]: I0318 13:23:14.253660 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:15.253972 master-0 kubenswrapper[7599]: I0318 13:23:15.253907 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:15.253972 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:15.253972 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:15.253972 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:15.253972 master-0 kubenswrapper[7599]: I0318 13:23:15.253967 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 13:23:16.253510 master-0 kubenswrapper[7599]: I0318 13:23:16.253444 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:16.253510 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:16.253510 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:16.253510 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:16.253766 master-0 kubenswrapper[7599]: I0318 13:23:16.253527 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:17.255049 master-0 kubenswrapper[7599]: I0318 13:23:17.254990 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:17.255049 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:17.255049 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:17.255049 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:17.255856 master-0 kubenswrapper[7599]: I0318 13:23:17.255063 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:18.254025 master-0 kubenswrapper[7599]: I0318 13:23:18.253919 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:18.254025 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:18.254025 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:18.254025 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:18.254456 master-0 kubenswrapper[7599]: I0318 13:23:18.254034 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:19.253955 master-0 kubenswrapper[7599]: I0318 13:23:19.253872 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:19.253955 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:19.253955 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:19.253955 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:19.254778 master-0 kubenswrapper[7599]: I0318 13:23:19.253988 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:20.253278 master-0 kubenswrapper[7599]: I0318 13:23:20.253187 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:20.253278 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 
13:23:20.253278 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:20.253278 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:20.253580 master-0 kubenswrapper[7599]: I0318 13:23:20.253288 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:21.254914 master-0 kubenswrapper[7599]: I0318 13:23:21.254826 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:21.254914 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:21.254914 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:21.254914 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:21.255909 master-0 kubenswrapper[7599]: I0318 13:23:21.254924 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:22.254227 master-0 kubenswrapper[7599]: I0318 13:23:22.254175 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:22.254227 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:22.254227 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:22.254227 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:22.254567 master-0 kubenswrapper[7599]: I0318 13:23:22.254239 
7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:23.254662 master-0 kubenswrapper[7599]: I0318 13:23:23.254589 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:23.254662 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:23.254662 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:23.254662 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:23.254662 master-0 kubenswrapper[7599]: I0318 13:23:23.254648 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:24.254943 master-0 kubenswrapper[7599]: I0318 13:23:24.254879 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:24.254943 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:24.254943 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:24.254943 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:24.256016 master-0 kubenswrapper[7599]: I0318 13:23:24.254955 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500"
Mar 18 13:23:25.254739 master-0 kubenswrapper[7599]: I0318 13:23:25.254667 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:25.254739 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:25.254739 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:25.254739 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:25.255507 master-0 kubenswrapper[7599]: I0318 13:23:25.254765 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:26.255614 master-0 kubenswrapper[7599]: I0318 13:23:26.255519 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:26.255614 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:26.255614 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:26.255614 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:26.256772 master-0 kubenswrapper[7599]: I0318 13:23:26.255648 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:27.254272 master-0 kubenswrapper[7599]: I0318 13:23:27.254197 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:27.254272 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:27.254272 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:27.254272 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:27.254654 master-0 kubenswrapper[7599]: I0318 13:23:27.254284 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:28.253596 master-0 kubenswrapper[7599]: I0318 13:23:28.253540 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:28.253596 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:28.253596 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:28.253596 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:28.254309 master-0 kubenswrapper[7599]: I0318 13:23:28.253606 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:29.255647 master-0 kubenswrapper[7599]: I0318 13:23:29.255557 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:29.255647 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:29.255647 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:29.255647 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:29.256797 master-0 kubenswrapper[7599]: I0318 13:23:29.255650 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:30.254997 master-0 kubenswrapper[7599]: I0318 13:23:30.254935 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:30.254997 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:30.254997 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:30.254997 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:30.255359 master-0 kubenswrapper[7599]: I0318 13:23:30.255003 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:31.255035 master-0 kubenswrapper[7599]: I0318 13:23:31.254956 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:31.255035 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:31.255035 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:31.255035 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:31.256299 master-0 kubenswrapper[7599]: I0318 13:23:31.255057 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:32.257556 master-0 kubenswrapper[7599]: I0318 13:23:32.257481 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:32.257556 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:32.257556 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:32.257556 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:32.258610 master-0 kubenswrapper[7599]: I0318 13:23:32.257566 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:33.255067 master-0 kubenswrapper[7599]: I0318 13:23:33.254975 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:33.255067 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:33.255067 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:33.255067 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:33.255550 master-0 kubenswrapper[7599]: I0318 13:23:33.255099 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:34.255287 master-0 kubenswrapper[7599]: I0318 13:23:34.255193 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:34.255287 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:34.255287 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:34.255287 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:34.256263 master-0 kubenswrapper[7599]: I0318 13:23:34.255291 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:35.258201 master-0 kubenswrapper[7599]: I0318 13:23:35.258150 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:35.258201 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:35.258201 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:35.258201 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:35.259168 master-0 kubenswrapper[7599]: I0318 13:23:35.259129 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:36.254744 master-0 kubenswrapper[7599]: I0318 13:23:36.254673 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:36.254744 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:36.254744 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:36.254744 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:36.255089 master-0 kubenswrapper[7599]: I0318 13:23:36.254773 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:37.255149 master-0 kubenswrapper[7599]: I0318 13:23:37.255023 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:37.255149 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:37.255149 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:37.255149 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:37.256003 master-0 kubenswrapper[7599]: I0318 13:23:37.255172 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:38.253984 master-0 kubenswrapper[7599]: I0318 13:23:38.253932 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:38.253984 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:38.253984 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:38.253984 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:38.254589 master-0 kubenswrapper[7599]: I0318 13:23:38.254537 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:39.254817 master-0 kubenswrapper[7599]: I0318 13:23:39.254731 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:39.254817 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:39.254817 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:39.254817 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:39.255480 master-0 kubenswrapper[7599]: I0318 13:23:39.254827 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:40.255111 master-0 kubenswrapper[7599]: I0318 13:23:40.255025 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:40.255111 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:40.255111 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:40.255111 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:40.255850 master-0 kubenswrapper[7599]: I0318 13:23:40.255127 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:40.439400 master-0 kubenswrapper[7599]: I0318 13:23:40.439306 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"]
Mar 18 13:23:40.439807 master-0 kubenswrapper[7599]: E0318 13:23:40.439764 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b975a4-4df1-4196-b3b4-b66a682f1c07" containerName="installer"
Mar 18 13:23:40.439807 master-0 kubenswrapper[7599]: I0318 13:23:40.439806 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b975a4-4df1-4196-b3b4-b66a682f1c07" containerName="installer"
Mar 18 13:23:40.440231 master-0 kubenswrapper[7599]: I0318 13:23:40.440184 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b975a4-4df1-4196-b3b4-b66a682f1c07" containerName="installer"
Mar 18 13:23:40.441833 master-0 kubenswrapper[7599]: I0318 13:23:40.441793 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.444521 master-0 kubenswrapper[7599]: I0318 13:23:40.444482 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-5dnvq"
Mar 18 13:23:40.480974 master-0 kubenswrapper[7599]: I0318 13:23:40.460888 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"]
Mar 18 13:23:40.582362 master-0 kubenswrapper[7599]: I0318 13:23:40.582219 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvq2h\" (UniqueName: \"kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.582362 master-0 kubenswrapper[7599]: I0318 13:23:40.582306 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.683538 master-0 kubenswrapper[7599]: I0318 13:23:40.683478 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvq2h\" (UniqueName: \"kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.683722 master-0 kubenswrapper[7599]: I0318 13:23:40.683558 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.693127 master-0 kubenswrapper[7599]: I0318 13:23:40.693082 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.702080 master-0 kubenswrapper[7599]: I0318 13:23:40.702044 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvq2h\" (UniqueName: \"kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:40.786332 master-0 kubenswrapper[7599]: I0318 13:23:40.786240 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:23:41.232107 master-0 kubenswrapper[7599]: I0318 13:23:41.232057 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"]
Mar 18 13:23:41.242164 master-0 kubenswrapper[7599]: W0318 13:23:41.240006 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod830ff1d6_332e_46b1_b13c_c2507fdc3c19.slice/crio-b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196 WatchSource:0}: Error finding container b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196: Status 404 returned error can't find the container with id b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196
Mar 18 13:23:41.253453 master-0 kubenswrapper[7599]: I0318 13:23:41.253394 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:41.253453 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:41.253453 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:41.253453 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:41.253664 master-0 kubenswrapper[7599]: I0318 13:23:41.253469 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:41.418480 master-0 kubenswrapper[7599]: I0318 13:23:41.418406 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196"}
Mar 18 13:23:42.254308 master-0 kubenswrapper[7599]: I0318 13:23:42.254236 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:42.254308 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:42.254308 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:42.254308 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:42.254711 master-0 kubenswrapper[7599]: I0318 13:23:42.254327 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:42.429343 master-0 kubenswrapper[7599]: I0318 13:23:42.429260 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"b6426d584feaf1dccc9586fadfcc5b8411ec145f968fc4d370c3068013252e93"}
Mar 18 13:23:42.429343 master-0 kubenswrapper[7599]: I0318 13:23:42.429347 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"badd417202c4299e05d5e5c0664cbf010b21bb652f30b93278ac43926e68a829"}
Mar 18 13:23:42.468121 master-0 kubenswrapper[7599]: I0318 13:23:42.467954 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" podStartSLOduration=45.467930813 podStartE2EDuration="45.467930813s" podCreationTimestamp="2026-03-18 13:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:23:42.46352437 +0000 UTC m=+997.424578662" watchObservedRunningTime="2026-03-18 13:23:42.467930813 +0000 UTC m=+997.428985065"
Mar 18 13:23:42.505496 master-0 kubenswrapper[7599]: I0318 13:23:42.503966 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"]
Mar 18 13:23:42.505496 master-0 kubenswrapper[7599]: I0318 13:23:42.504834 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="multus-admission-controller" containerID="cri-o://b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8" gracePeriod=30
Mar 18 13:23:42.505496 master-0 kubenswrapper[7599]: I0318 13:23:42.505286 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="kube-rbac-proxy" containerID="cri-o://6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a" gracePeriod=30
Mar 18 13:23:43.253690 master-0 kubenswrapper[7599]: I0318 13:23:43.253648 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:43.253690 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:43.253690 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:43.253690 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:43.254131 master-0 kubenswrapper[7599]: I0318 13:23:43.254107 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:43.440634 master-0 kubenswrapper[7599]: I0318 13:23:43.440512 7599 generic.go:334] "Generic (PLEG): container finished" podID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerID="6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a" exitCode=0
Mar 18 13:23:43.440634 master-0 kubenswrapper[7599]: I0318 13:23:43.440549 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerDied","Data":"6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a"}
Mar 18 13:23:44.254038 master-0 kubenswrapper[7599]: I0318 13:23:44.253978 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:44.254038 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:44.254038 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:44.254038 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:44.254306 master-0 kubenswrapper[7599]: I0318 13:23:44.254055 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:45.254491 master-0 kubenswrapper[7599]: I0318 13:23:45.254387 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:45.254491 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:45.254491 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:45.254491 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:45.256846 master-0 kubenswrapper[7599]: I0318 13:23:45.254502 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:46.254226 master-0 kubenswrapper[7599]: I0318 13:23:46.254143 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:46.254226 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:46.254226 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:46.254226 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:46.255312 master-0 kubenswrapper[7599]: I0318 13:23:46.254238 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:47.254838 master-0 kubenswrapper[7599]: I0318 13:23:47.254718 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:47.254838 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:47.254838 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:47.254838 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:47.254838 master-0 kubenswrapper[7599]: I0318 13:23:47.254814 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:48.255032 master-0 kubenswrapper[7599]: I0318 13:23:48.254886 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:48.255032 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:48.255032 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:48.255032 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:48.256213 master-0 kubenswrapper[7599]: I0318 13:23:48.255079 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:49.254777 master-0 kubenswrapper[7599]: I0318 13:23:49.254629 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:49.254777 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:49.254777 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:49.254777 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:49.254777 master-0 kubenswrapper[7599]: I0318 13:23:49.254753 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:50.255180 master-0 kubenswrapper[7599]: I0318 13:23:50.255093 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:50.255180 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:50.255180 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:50.255180 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:50.256294 master-0 kubenswrapper[7599]: I0318 13:23:50.255173 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:51.254332 master-0 kubenswrapper[7599]: I0318 13:23:51.254272 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 13:23:51.254332 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 18 13:23:51.254332 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 18 13:23:51.254332 master-0 kubenswrapper[7599]: healthz check failed
Mar 18 13:23:51.254689 master-0 kubenswrapper[7599]: I0318 13:23:51.254361 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 13:23:51.573512 master-0 kubenswrapper[7599]: E0318 13:23:51.573328 7599 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Mar 18 13:23:51.574981 master-0 kubenswrapper[7599]: I0318 13:23:51.574921 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 13:23:51.575852 master-0 kubenswrapper[7599]: I0318 13:23:51.575822 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.576343 master-0 kubenswrapper[7599]: I0318 13:23:51.576303 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 13:23:51.576556 master-0 kubenswrapper[7599]: I0318 13:23:51.576520 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7" gracePeriod=15
Mar 18 13:23:51.576661 master-0 kubenswrapper[7599]: I0318 13:23:51.576589 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605" gracePeriod=15
Mar 18 13:23:51.576760 master-0 kubenswrapper[7599]: I0318 13:23:51.576735 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:23:51.576942 master-0 kubenswrapper[7599]: E0318 13:23:51.576901 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 13:23:51.576942 master-0 kubenswrapper[7599]: I0318 13:23:51.576918 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 13:23:51.576942 master-0 kubenswrapper[7599]: E0318 13:23:51.576940 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 13:23:51.576942 master-0 kubenswrapper[7599]: I0318 13:23:51.576947 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 13:23:51.577162 master-0 kubenswrapper[7599]: E0318 13:23:51.576958 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 13:23:51.577162 master-0 kubenswrapper[7599]: I0318 13:23:51.576966 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 13:23:51.577162 master-0 kubenswrapper[7599]: I0318 13:23:51.577069 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 13:23:51.577162 master-0 kubenswrapper[7599]: I0318 13:23:51.577080 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 13:23:51.577162 master-0 kubenswrapper[7599]: I0318 13:23:51.577094 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 13:23:51.578663 master-0 kubenswrapper[7599]: I0318 13:23:51.578638 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:51.654363 master-0 kubenswrapper[7599]: I0318 13:23:51.654279 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:51.654363 master-0 kubenswrapper[7599]: I0318 13:23:51.654353 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.654712 master-0 kubenswrapper[7599]: I0318 13:23:51.654441 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:51.654712 master-0 kubenswrapper[7599]: I0318 13:23:51.654476 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.654932 master-0 kubenswrapper[7599]: I0318 13:23:51.654884 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.654997 master-0 kubenswrapper[7599]: I0318 13:23:51.654967 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.654997 master-0 kubenswrapper[7599]: I0318 13:23:51.654984 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.655087 master-0 kubenswrapper[7599]: I0318 13:23:51.655012 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:23:51.668454 master-0 kubenswrapper[7599]: E0318 13:23:51.665725 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:23:51.700320 master-0 kubenswrapper[7599]: E0318 13:23:51.699914 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.759576 master-0 kubenswrapper[7599]: I0318 13:23:51.759208 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.759576 master-0 kubenswrapper[7599]: I0318 13:23:51.759307 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.759576 master-0 kubenswrapper[7599]: I0318 13:23:51.759370 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.759576 master-0 kubenswrapper[7599]: I0318 13:23:51.759404 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.759576 master-0 kubenswrapper[7599]: I0318 13:23:51.759473 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759605 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759637 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759669 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759688 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759707 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759605 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759796 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759859 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759871 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759911 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.760076 master-0 kubenswrapper[7599]: I0318 13:23:51.759998 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:51.966715 master-0 kubenswrapper[7599]: I0318 13:23:51.966652 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:51.987872 master-0 kubenswrapper[7599]: E0318 13:23:51.987735 7599 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df24af3dd3a68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:8e7a82869988463543d3d8dd1f0b5fe3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:23:51.986707048 +0000 UTC m=+1006.947761330,LastTimestamp:2026-03-18 13:23:51.986707048 +0000 UTC m=+1006.947761330,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:23:52.001456 master-0 kubenswrapper[7599]: I0318 13:23:52.000855 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:52.027143 master-0 kubenswrapper[7599]: W0318 13:23:52.027072 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63 WatchSource:0}: Error finding container 930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63: Status 404 returned error can't find the container with id 930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63 Mar 18 13:23:52.254468 master-0 kubenswrapper[7599]: I0318 13:23:52.254262 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:52.254468 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:52.254468 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:52.254468 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:52.254468 master-0 kubenswrapper[7599]: I0318 13:23:52.254331 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:52.513814 master-0 kubenswrapper[7599]: I0318 13:23:52.513723 7599 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b" exitCode=0 Mar 18 13:23:52.513814 master-0 kubenswrapper[7599]: I0318 13:23:52.513800 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b"} Mar 18 13:23:52.514157 master-0 kubenswrapper[7599]: I0318 13:23:52.513840 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63"} Mar 18 13:23:52.515289 master-0 kubenswrapper[7599]: E0318 13:23:52.514960 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:52.515939 master-0 kubenswrapper[7599]: I0318 13:23:52.515774 7599 generic.go:334] "Generic (PLEG): container finished" podID="9b853631-ff77-4643-aa07-b1f8056320a3" containerID="64aef303c60ed75302cdf53b54c1f5e7b01831e38260821ecee71573b2f8873b" exitCode=0 Mar 18 13:23:52.515939 master-0 kubenswrapper[7599]: I0318 13:23:52.515827 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerDied","Data":"64aef303c60ed75302cdf53b54c1f5e7b01831e38260821ecee71573b2f8873b"} Mar 18 13:23:52.516684 master-0 kubenswrapper[7599]: I0318 13:23:52.516634 7599 status_manager.go:851] "Failed to get status for pod" podUID="9b853631-ff77-4643-aa07-b1f8056320a3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:52.518499 master-0 kubenswrapper[7599]: I0318 13:23:52.518469 7599 generic.go:334] "Generic (PLEG): container finished" 
podID="49fac1b46a11e49501805e891baae4a9" containerID="a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605" exitCode=0 Mar 18 13:23:52.524132 master-0 kubenswrapper[7599]: I0318 13:23:52.524075 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092"} Mar 18 13:23:52.524252 master-0 kubenswrapper[7599]: I0318 13:23:52.524135 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"aad28fcc9206746f0d26ad1538815d0d7f16ddcfe6c46b81f66fd625f49ae815"} Mar 18 13:23:52.525143 master-0 kubenswrapper[7599]: E0318 13:23:52.525104 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:23:52.525229 master-0 kubenswrapper[7599]: I0318 13:23:52.525110 7599 status_manager.go:851] "Failed to get status for pod" podUID="9b853631-ff77-4643-aa07-b1f8056320a3" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:23:53.253282 master-0 kubenswrapper[7599]: I0318 13:23:53.253233 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:53.253282 master-0 kubenswrapper[7599]: [-]has-synced 
failed: reason withheld Mar 18 13:23:53.253282 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:53.253282 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:53.253794 master-0 kubenswrapper[7599]: I0318 13:23:53.253305 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:53.537498 master-0 kubenswrapper[7599]: I0318 13:23:53.537433 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1"} Mar 18 13:23:53.537498 master-0 kubenswrapper[7599]: I0318 13:23:53.537484 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d"} Mar 18 13:23:53.537498 master-0 kubenswrapper[7599]: I0318 13:23:53.537497 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1"} Mar 18 13:23:54.121787 master-0 kubenswrapper[7599]: I0318 13:23:54.121739 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:23:54.122101 master-0 kubenswrapper[7599]: I0318 13:23:54.122050 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:54.196902 master-0 kubenswrapper[7599]: I0318 13:23:54.196836 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.196902 master-0 kubenswrapper[7599]: I0318 13:23:54.196897 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.197080 master-0 kubenswrapper[7599]: I0318 13:23:54.196941 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " Mar 18 13:23:54.197080 master-0 kubenswrapper[7599]: I0318 13:23:54.197008 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.197080 master-0 kubenswrapper[7599]: I0318 13:23:54.197041 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.197080 master-0 kubenswrapper[7599]: I0318 13:23:54.197072 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.197200 master-0 kubenswrapper[7599]: I0318 13:23:54.197108 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " Mar 18 13:23:54.197200 master-0 kubenswrapper[7599]: I0318 13:23:54.197194 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 13:23:54.197262 master-0 kubenswrapper[7599]: I0318 13:23:54.197215 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " Mar 18 13:23:54.197569 master-0 kubenswrapper[7599]: I0318 13:23:54.197543 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.197618 master-0 kubenswrapper[7599]: I0318 13:23:54.197592 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.197653 master-0 kubenswrapper[7599]: I0318 13:23:54.197626 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.198025 master-0 kubenswrapper[7599]: I0318 13:23:54.197968 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.198067 master-0 kubenswrapper[7599]: I0318 13:23:54.198045 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.198067 master-0 kubenswrapper[7599]: I0318 13:23:54.198040 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.198135 master-0 kubenswrapper[7599]: I0318 13:23:54.198076 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock" (OuterVolumeSpecName: "var-lock") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.198135 master-0 kubenswrapper[7599]: I0318 13:23:54.198116 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:23:54.200890 master-0 kubenswrapper[7599]: I0318 13:23:54.200855 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:23:54.253389 master-0 kubenswrapper[7599]: I0318 13:23:54.253270 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:54.253389 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:54.253389 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:54.253389 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:54.253389 master-0 kubenswrapper[7599]: I0318 13:23:54.253351 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:54.299053 master-0 kubenswrapper[7599]: I0318 13:23:54.298999 7599 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299053 master-0 kubenswrapper[7599]: I0318 13:23:54.299054 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299053 master-0 kubenswrapper[7599]: I0318 13:23:54.299066 7599 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299081 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299091 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299099 7599 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299107 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299115 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.299317 master-0 kubenswrapper[7599]: I0318 13:23:54.299139 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:23:54.600776 master-0 kubenswrapper[7599]: I0318 13:23:54.600614 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146"} Mar 18 13:23:54.600776 master-0 kubenswrapper[7599]: I0318 13:23:54.600714 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8"} Mar 18 13:23:54.600776 master-0 kubenswrapper[7599]: I0318 13:23:54.600740 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:23:54.602365 master-0 kubenswrapper[7599]: I0318 13:23:54.602329 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerDied","Data":"a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d"} Mar 18 13:23:54.602447 master-0 kubenswrapper[7599]: I0318 13:23:54.602370 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d" Mar 18 13:23:54.602447 master-0 kubenswrapper[7599]: I0318 13:23:54.602432 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:23:54.605746 master-0 kubenswrapper[7599]: I0318 13:23:54.605704 7599 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7" exitCode=0 Mar 18 13:23:54.605827 master-0 kubenswrapper[7599]: I0318 13:23:54.605775 7599 scope.go:117] "RemoveContainer" containerID="a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605" Mar 18 13:23:54.605916 master-0 kubenswrapper[7599]: I0318 13:23:54.605894 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 13:23:54.684119 master-0 kubenswrapper[7599]: I0318 13:23:54.682478 7599 scope.go:117] "RemoveContainer" containerID="2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7" Mar 18 13:23:54.723441 master-0 kubenswrapper[7599]: I0318 13:23:54.716360 7599 scope.go:117] "RemoveContainer" containerID="d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d" Mar 18 13:23:54.763438 master-0 kubenswrapper[7599]: I0318 13:23:54.762491 7599 scope.go:117] "RemoveContainer" containerID="a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605" Mar 18 13:23:54.771427 master-0 kubenswrapper[7599]: E0318 13:23:54.768613 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605\": container with ID starting with a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605 not found: ID does not exist" containerID="a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605" Mar 18 13:23:54.771427 master-0 kubenswrapper[7599]: I0318 13:23:54.768677 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605"} err="failed to get container status \"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605\": rpc error: code = NotFound desc = could not find container \"a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605\": container with ID starting with a4f58e12d74ce5ae7d84f01a469edde896e16989e12c3d7d74c8926694ed4605 not found: ID does not exist" Mar 18 13:23:54.771427 master-0 kubenswrapper[7599]: I0318 13:23:54.768705 7599 scope.go:117] "RemoveContainer" containerID="2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7" Mar 18 13:23:54.775476 master-0 kubenswrapper[7599]: E0318 
13:23:54.772112 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7\": container with ID starting with 2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7 not found: ID does not exist" containerID="2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7" Mar 18 13:23:54.775476 master-0 kubenswrapper[7599]: I0318 13:23:54.772159 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7"} err="failed to get container status \"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7\": rpc error: code = NotFound desc = could not find container \"2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7\": container with ID starting with 2bf6097f58997d4a7857bf19b235ef6fc055563709efd33dba529c95e95347b7 not found: ID does not exist" Mar 18 13:23:54.775476 master-0 kubenswrapper[7599]: I0318 13:23:54.772205 7599 scope.go:117] "RemoveContainer" containerID="d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d" Mar 18 13:23:54.775476 master-0 kubenswrapper[7599]: E0318 13:23:54.774959 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d\": container with ID starting with d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d not found: ID does not exist" containerID="d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d" Mar 18 13:23:54.775476 master-0 kubenswrapper[7599]: I0318 13:23:54.774994 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d"} err="failed to get container status 
\"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d\": rpc error: code = NotFound desc = could not find container \"d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d\": container with ID starting with d6a80d543c22a63f6dea0e7d105a4e1ffc771640a4b28b8a800b1b2d79f67b1d not found: ID does not exist" Mar 18 13:23:55.254035 master-0 kubenswrapper[7599]: I0318 13:23:55.253984 7599 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:23:55.254035 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 18 13:23:55.254035 master-0 kubenswrapper[7599]: [+]process-running ok Mar 18 13:23:55.254035 master-0 kubenswrapper[7599]: healthz check failed Mar 18 13:23:55.255041 master-0 kubenswrapper[7599]: I0318 13:23:55.254050 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:23:55.386162 master-0 kubenswrapper[7599]: I0318 13:23:55.386114 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 18 13:23:55.386634 master-0 kubenswrapper[7599]: I0318 13:23:55.386610 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 13:23:55.877642 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 13:23:55.908894 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 13:23:55.909467 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 13:23:55.913360 master-0 systemd[1]: kubelet.service: Consumed 2min 20.490s CPU time. 
Mar 18 13:23:55.965662 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 13:23:56.105301 master-0 kubenswrapper[27835]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 13:23:56.106280 master-0 kubenswrapper[27835]: I0318 13:23:56.105376 27835 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107698 27835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107714 27835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107721 27835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107726 27835 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107730 27835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 13:23:56.107726 master-0 kubenswrapper[27835]: W0318 13:23:56.107735 27835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107740 27835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107745 27835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107749 27835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107753 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107756 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107761 27835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107764 27835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107768 27835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107772 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107775 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107779 27835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107782 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107786 27835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 13:23:56.107992 master-0 
kubenswrapper[27835]: W0318 13:23:56.107790 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107800 27835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107805 27835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107809 27835 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107813 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 13:23:56.107992 master-0 kubenswrapper[27835]: W0318 13:23:56.107817 27835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107821 27835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107825 27835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107829 27835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107832 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107837 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107840 27835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: 
W0318 13:23:56.107844 27835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107848 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107851 27835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107855 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107859 27835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107862 27835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107866 27835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107870 27835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107873 27835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107877 27835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107881 27835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107885 27835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107889 27835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 13:23:56.108715 master-0 kubenswrapper[27835]: W0318 13:23:56.107892 
27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107896 27835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107900 27835 feature_gate.go:330] unrecognized feature gate: Example Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107903 27835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107906 27835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107910 27835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107914 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107917 27835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107921 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107925 27835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107929 27835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107933 27835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107936 27835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107941 27835 feature_gate.go:330] unrecognized feature gate: 
InsightsConfig Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107944 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107947 27835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107951 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107955 27835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107959 27835 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107963 27835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107966 27835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 13:23:56.109469 master-0 kubenswrapper[27835]: W0318 13:23:56.107970 27835 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107973 27835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107977 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107981 27835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107984 27835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107989 27835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: W0318 13:23:56.107994 27835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108073 27835 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108083 27835 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108090 27835 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108095 27835 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108101 27835 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108105 27835 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108111 27835 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108116 27835 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108120 27835 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108124 27835 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108129 27835 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108134 27835 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108140 27835 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108144 27835 flags.go:64] FLAG: --cgroup-root="" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108149 27835 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 13:23:56.110567 master-0 kubenswrapper[27835]: I0318 13:23:56.108154 27835 flags.go:64] FLAG: --client-ca-file="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108159 27835 flags.go:64] FLAG: --cloud-config="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108163 27835 flags.go:64] FLAG: --cloud-provider="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108167 27835 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108172 27835 flags.go:64] FLAG: --cluster-domain="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108176 27835 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108181 27835 flags.go:64] FLAG: --config-dir="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108185 27835 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108189 27835 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108195 27835 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108200 27835 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108204 27835 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108208 27835 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 
13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108212 27835 flags.go:64] FLAG: --contention-profiling="false" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108217 27835 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108221 27835 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108227 27835 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108231 27835 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108236 27835 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108240 27835 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108244 27835 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108248 27835 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108253 27835 flags.go:64] FLAG: --enable-server="true" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108257 27835 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108262 27835 flags.go:64] FLAG: --event-burst="100" Mar 18 13:23:56.112025 master-0 kubenswrapper[27835]: I0318 13:23:56.108266 27835 flags.go:64] FLAG: --event-qps="50" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108270 27835 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108275 27835 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 13:23:56.113106 
master-0 kubenswrapper[27835]: I0318 13:23:56.108279 27835 flags.go:64] FLAG: --eviction-hard="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108285 27835 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108289 27835 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108294 27835 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108298 27835 flags.go:64] FLAG: --eviction-soft="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108302 27835 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108306 27835 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108311 27835 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108315 27835 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108319 27835 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108323 27835 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108328 27835 flags.go:64] FLAG: --feature-gates="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108332 27835 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108337 27835 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108342 27835 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 13:23:56.113106 master-0 
kubenswrapper[27835]: I0318 13:23:56.108347 27835 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108352 27835 flags.go:64] FLAG: --healthz-port="10248" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108358 27835 flags.go:64] FLAG: --help="false" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108363 27835 flags.go:64] FLAG: --hostname-override="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108368 27835 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108373 27835 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108378 27835 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 13:23:56.113106 master-0 kubenswrapper[27835]: I0318 13:23:56.108384 27835 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108388 27835 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108394 27835 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108400 27835 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108405 27835 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108438 27835 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108444 27835 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108448 27835 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: 
I0318 13:23:56.108453 27835 flags.go:64] FLAG: --kube-reserved=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108458 27835 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108463 27835 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108469 27835 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108474 27835 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108480 27835 flags.go:64] FLAG: --lock-file=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108489 27835 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108495 27835 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108500 27835 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108508 27835 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108512 27835 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108516 27835 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108520 27835 flags.go:64] FLAG: --logging-format="text"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108524 27835 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108529 27835 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108533 27835 flags.go:64] FLAG: --manifest-url=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108537 27835 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 13:23:56.114065 master-0 kubenswrapper[27835]: I0318 13:23:56.108542 27835 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108547 27835 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108552 27835 flags.go:64] FLAG: --max-pods="110"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108556 27835 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108560 27835 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108564 27835 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108569 27835 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108573 27835 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108577 27835 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108581 27835 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108592 27835 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108596 27835 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108601 27835 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108610 27835 flags.go:64] FLAG: --pod-cidr=""
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108614 27835 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108621 27835 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108625 27835 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108629 27835 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108633 27835 flags.go:64] FLAG: --port="10250"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108638 27835 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108642 27835 flags.go:64] FLAG: --provider-id=""
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108648 27835 flags.go:64] FLAG: --qos-reserved=""
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108652 27835 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108656 27835 flags.go:64] FLAG: --register-node="true"
Mar 18 13:23:56.115002 master-0 kubenswrapper[27835]: I0318 13:23:56.108661 27835 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108665 27835 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108672 27835 flags.go:64] FLAG: --registry-burst="10"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108676 27835 flags.go:64] FLAG: --registry-qps="5"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108680 27835 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108684 27835 flags.go:64] FLAG: --reserved-memory=""
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108690 27835 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108694 27835 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108698 27835 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108703 27835 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108707 27835 flags.go:64] FLAG: --runonce="false"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108711 27835 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108715 27835 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108720 27835 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108724 27835 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108728 27835 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108733 27835 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108738 27835 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108743 27835 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108747 27835 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108752 27835 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108756 27835 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108760 27835 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108765 27835 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108769 27835 flags.go:64] FLAG: --system-cgroups=""
Mar 18 13:23:56.115970 master-0 kubenswrapper[27835]: I0318 13:23:56.108774 27835 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108781 27835 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108789 27835 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108793 27835 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108800 27835 flags.go:64] FLAG: --tls-min-version=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108805 27835 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108810 27835 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108815 27835 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108821 27835 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108826 27835 flags.go:64] FLAG: --v="2"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108833 27835 flags.go:64] FLAG: --version="false"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108840 27835 flags.go:64] FLAG: --vmodule=""
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108846 27835 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: I0318 13:23:56.108851 27835 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108955 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108960 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108966 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108970 27835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108974 27835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108978 27835 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108981 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108986 27835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108989 27835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:56.116911 master-0 kubenswrapper[27835]: W0318 13:23:56.108993 27835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.108996 27835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109000 27835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109004 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109007 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109012 27835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
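(Editor's note: several of the flags in the FLAG dump above, such as --system-reserved, --register-with-taints, and --volume-plugin-dir, are deprecated in favor of the kubelet's config file, as the deprecation warnings at startup indicate. A minimal, hypothetical KubeletConfiguration sketch of those same values, assuming the kubelet.config.k8s.io/v1beta1 schema, might look like:)

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
# Config-file equivalents of the deprecated flags, using the values
# from the FLAG lines logged above (illustrative only).
systemReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 1Gi
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
```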
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109018 27835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109022 27835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109026 27835 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109030 27835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109033 27835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109039 27835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109043 27835 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109049 27835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109054 27835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109058 27835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109063 27835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109067 27835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109070 27835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:56.117800 master-0 kubenswrapper[27835]: W0318 13:23:56.109075 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109079 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109082 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109086 27835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109090 27835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109094 27835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109097 27835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109101 27835 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109104 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109108 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109111 27835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109115 27835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109119 27835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109122 27835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109126 27835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109129 27835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109133 27835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109136 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109140 27835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109143 27835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:56.118605 master-0 kubenswrapper[27835]: W0318 13:23:56.109147 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109151 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109154 27835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109158 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109163 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109166 27835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109173 27835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109177 27835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109181 27835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109184 27835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109188 27835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109193 27835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109197 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109201 27835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109204 27835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109208 27835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109211 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109215 27835 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109218 27835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109222 27835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:56.119350 master-0 kubenswrapper[27835]: W0318 13:23:56.109227 27835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.109231 27835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.109235 27835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.109239 27835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: I0318 13:23:56.109251 27835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: I0318 13:23:56.115877 27835 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: I0318 13:23:56.115928 27835 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116065 27835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116079 27835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116090 27835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116098 27835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116107 27835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116115 27835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116123 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116131 27835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:56.120167 master-0 kubenswrapper[27835]: W0318 13:23:56.116139 27835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116147 27835 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116155 27835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116163 27835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116172 27835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116180 27835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116187 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116196 27835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116204 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116212 27835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116219 27835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116228 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116236 27835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116244 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116251 27835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116262 27835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116275 27835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116286 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116294 27835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116302 27835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:56.120859 master-0 kubenswrapper[27835]: W0318 13:23:56.116312 27835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116320 27835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116329 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116339 27835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116347 27835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116355 27835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116365 27835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116373 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116381 27835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116390 27835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116397 27835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116408 27835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116443 27835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116453 27835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116463 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116472 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116483 27835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116493 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116503 27835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:56.121602 master-0 kubenswrapper[27835]: W0318 13:23:56.116512 27835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116521 27835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116530 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116541 27835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116552 27835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116561 27835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116570 27835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116578 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116586 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116594 27835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116602 27835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116610 27835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116618 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116625 27835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116633 27835 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116642 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116650 27835 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116658 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116666 27835 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:56.122458 master-0 kubenswrapper[27835]: W0318 13:23:56.116674 27835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116682 27835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116689 27835 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116697 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116707 27835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116715 27835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: I0318 13:23:56.116728 27835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.116995 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117010 27835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117020 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117030 27835 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117040 27835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117049 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117058 27835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117066 27835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 13:23:56.123181 master-0 kubenswrapper[27835]: W0318 13:23:56.117074 27835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117083 27835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117091 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117099 27835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117107 27835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117115 27835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117123 27835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117131 27835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117139 27835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117147 27835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117158 27835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117169 27835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117178 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117187 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117195 27835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117203 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117210 27835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117219 27835 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117226 27835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 13:23:56.124083 master-0 kubenswrapper[27835]: W0318 13:23:56.117234 27835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117242 27835 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117250 27835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117259 27835 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117266 27835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117276 27835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117285 27835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117293 27835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117305 27835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117315 27835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117323 27835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117332 27835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117341 27835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117350 27835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117360 27835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117371 27835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117381 27835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117390 27835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117398 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117407 27835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 13:23:56.125014 master-0 kubenswrapper[27835]: W0318 13:23:56.117438 27835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117447 27835 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117457 27835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117465 27835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117474 27835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117482 27835 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117490 27835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117498 27835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117506 27835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117515 27835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117525 27835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117534 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117544 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117553 27835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117562 27835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117570 27835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117578 27835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117586 27835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117594 27835 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 13:23:56.125948 master-0 kubenswrapper[27835]: W0318 13:23:56.117602 27835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: W0318 13:23:56.117610 27835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: W0318 13:23:56.117618 27835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: W0318 13:23:56.117627 27835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: W0318 13:23:56.117635 27835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: W0318 13:23:56.117642 27835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.117656 27835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.117931 27835 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.121159 27835 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.121293 27835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.121707 27835 server.go:997] "Starting client certificate rotation"
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.121726 27835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.121926 27835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 07:00:22.972284867 +0000 UTC
Mar 18 13:23:56.126972 master-0 kubenswrapper[27835]: I0318 13:23:56.122011 27835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h36m26.85027787s for next certificate rotation
Mar 18 13:23:56.128839 master-0 kubenswrapper[27835]: I0318 13:23:56.122856 27835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:23:56.128839 master-0 kubenswrapper[27835]: I0318 13:23:56.125335 27835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 13:23:56.131559 master-0 kubenswrapper[27835]: I0318 13:23:56.131491 27835 log.go:25] "Validated CRI v1 runtime API"
Mar 18 13:23:56.138792 master-0 kubenswrapper[27835]: I0318 13:23:56.138709 27835 log.go:25] "Validated CRI v1 image API"
Mar 18 13:23:56.140243 master-0 kubenswrapper[27835]: I0318 13:23:56.139852 27835 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 13:23:56.156425 master-0 kubenswrapper[27835]: I0318 13:23:56.156329 27835 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 b51f6abc-d651-468e-ae51-7c88144268ce:/dev/vda3]
Mar 18 13:23:56.158106 master-0 kubenswrapper[27835]: I0318 13:23:56.156395 27835 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1/userdata/shm major:0 minor:1014 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd/userdata/shm major:0 minor:961 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd/userdata/shm major:0 minor:607 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7/userdata/shm major:0 minor:78 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29/userdata/shm major:0 minor:480 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1/userdata/shm major:0 minor:405 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4/userdata/shm major:0 minor:545 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cceb4712c77ca2fdf0849f1bea9fd2ebeb3d8a95d1db4ec067d2a7d333a8d1f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cceb4712c77ca2fdf0849f1bea9fd2ebeb3d8a95d1db4ec067d2a7d333a8d1f/userdata/shm major:0 minor:653 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/217f2ddac8460682f53f483f75566ba056797e6cb9215803ff6c892d4d2a8575/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/217f2ddac8460682f53f483f75566ba056797e6cb9215803ff6c892d4d2a8575/userdata/shm major:0 minor:785 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20/userdata/shm major:0 minor:649 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/354c2a6b66c065fe648ce36ee5e4c7bbfed1c688af2120800fda750d61548f3b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/354c2a6b66c065fe648ce36ee5e4c7bbfed1c688af2120800fda750d61548f3b/userdata/shm major:0 minor:965 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2/userdata/shm major:0 minor:440 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71/userdata/shm major:0 minor:886 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b/userdata/shm major:0 minor:787 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d/userdata/shm major:0 minor:654 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5e20d46e2ff68c35ec5f71de1a7613daa62264adc487ab5ef65e9454569fe466/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5e20d46e2ff68c35ec5f71de1a7613daa62264adc487ab5ef65e9454569fe466/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f/userdata/shm major:0 minor:661 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52/userdata/shm major:0 minor:917 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f5a7d7c0e9750e48ccca14b1c41ca2a57206319db458c1aefe78bdb62a1f334/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f5a7d7c0e9750e48ccca14b1c41ca2a57206319db458c1aefe78bdb62a1f334/userdata/shm major:0 minor:951 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/637824f5bb31724423d6735813857b47b37d15ab88987d8a010fd58f58c5ab69/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/637824f5bb31724423d6735813857b47b37d15ab88987d8a010fd58f58c5ab69/userdata/shm major:0 minor:663 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2/userdata/shm major:0 minor:536 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c/userdata/shm major:0 minor:398 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/70ca4cb931b7545d294f00c69b8bfe23595c69c1d94a66566a713806aa3eda58/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/70ca4cb931b7545d294f00c69b8bfe23595c69c1d94a66566a713806aa3eda58/userdata/shm major:0 minor:555 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491/userdata/shm major:0 minor:571 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/77922c67e22a90e02f2bc6f9c2c3361d1f9624d65d1b4a186c450f61aa3c27f3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/77922c67e22a90e02f2bc6f9c2c3361d1f9624d65d1b4a186c450f61aa3c27f3/userdata/shm major:0 minor:848 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec/userdata/shm major:0 minor:370 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c0a9d3ecc02d97801da90faa78ea9a04fc4381142a502c2ebc0a26f2eb9f11b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c0a9d3ecc02d97801da90faa78ea9a04fc4381142a502c2ebc0a26f2eb9f11b/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e/userdata/shm major:0 minor:843 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f/userdata/shm major:0 minor:394 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9/userdata/shm major:0 minor:657 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f/userdata/shm major:0 minor:485 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731/userdata/shm major:0 minor:874 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/99a3ea12b4f55e1c479ad9ada5ad2452af1ac0e39904d45fd6656f0a1828ea6f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99a3ea12b4f55e1c479ad9ada5ad2452af1ac0e39904d45fd6656f0a1828ea6f/userdata/shm major:0 minor:873 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d840b1327f66205cf6b23b15b1f1425e68ae2cb9d5dd3a177c50ba638a9ce65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d840b1327f66205cf6b23b15b1f1425e68ae2cb9d5dd3a177c50ba638a9ce65/userdata/shm major:0 minor:700 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9e3149a06c6f175072a4f298029a63d5886a08058f2cfbf229c65bf7015d1f34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e3149a06c6f175072a4f298029a63d5886a08058f2cfbf229c65bf7015d1f34/userdata/shm major:0 minor:503 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f1629a9c890b158ad74d9b6c35c2de2573e526e00eff6015bd3861ec48b5231/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f1629a9c890b158ad74d9b6c35c2de2573e526e00eff6015bd3861ec48b5231/userdata/shm major:0 minor:1117 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aad28fcc9206746f0d26ad1538815d0d7f16ddcfe6c46b81f66fd625f49ae815/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aad28fcc9206746f0d26ad1538815d0d7f16ddcfe6c46b81f66fd625f49ae815/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196/userdata/shm major:0 minor:1174 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b8e76ab6e36792c638116c40619921d7addf605312998f00e62d98e5a5614955/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8e76ab6e36792c638116c40619921d7addf605312998f00e62d98e5a5614955/userdata/shm major:0 minor:936 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521/userdata/shm major:0 minor:845 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e/userdata/shm major:0 minor:811 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm major:0 minor:292 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca95e515f4a5a1b63626328ea2ad328d0f3f07c258a5281fc61399ac842b383f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca95e515f4a5a1b63626328ea2ad328d0f3f07c258a5281fc61399ac842b383f/userdata/shm major:0 minor:838 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/caf8685ec1d7171c12646ad4a2c704d85c1985e24c1994b6f4a18dfa14666d6f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/caf8685ec1d7171c12646ad4a2c704d85c1985e24c1994b6f4a18dfa14666d6f/userdata/shm major:0 minor:356 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb3a395c88586f9726036952a749f0819efe1ca07bfec591e8bf77ac60734a87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb3a395c88586f9726036952a749f0819efe1ca07bfec591e8bf77ac60734a87/userdata/shm major:0 minor:655 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb59122d7a7b042121b64340b8ada26c1823fa00f9c980926b47cbaa0d20cc3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb59122d7a7b042121b64340b8ada26c1823fa00f9c980926b47cbaa0d20cc3f/userdata/shm major:0 minor:937 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb74a42e367af8586d98d799b6ded81e9d93e7b3d806a9a925a94b3e763a3830/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb74a42e367af8586d98d799b6ded81e9d93e7b3d806a9a925a94b3e763a3830/userdata/shm major:0 minor:461 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm major:0 minor:283 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3/userdata/shm major:0 minor:933 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248/userdata/shm major:0 minor:650 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/daaff2e16f5e705f64dc5a7b025fa31e1b94f1cba87483d97066f316342671c2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/daaff2e16f5e705f64dc5a7b025fa31e1b94f1cba87483d97066f316342671c2/userdata/shm major:0 minor:934 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm major:0 minor:228 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm major:0 minor:307 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e48d984bde067fff459bf66d3627856479bf9e2fe952a4228b45cfe581507bda/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e48d984bde067fff459bf66d3627856479bf9e2fe952a4228b45cfe581507bda/userdata/shm major:0 minor:483 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ed3daf11e343e1b2061522afa05ec8c54dad41a761078c089559715ea58a7e8b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ed3daf11e343e1b2061522afa05ec8c54dad41a761078c089559715ea58a7e8b/userdata/shm major:0 minor:941 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db/userdata/shm major:0 minor:1115 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa7bdc6eb3bcdebec3d64b4ce8194bafce362b67c9019cd975ec6f9a5ac40f46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa7bdc6eb3bcdebec3d64b4ce8194bafce362b67c9019cd975ec6f9a5ac40f46/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3/userdata/shm major:0 minor:1103 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~projected/kube-api-access-zlb8t:{mountpoint:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~projected/kube-api-access-zlb8t major:0 minor:837 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/default-certificate major:0 minor:822 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/metrics-certs major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/stats-auth major:0 minor:835 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0278b04b-b27b-4717-a009-a70315fd05a6/volumes/kubernetes.io~projected/kube-api-access-2snjj:{mountpoint:/var/lib/kubelet/pods/0278b04b-b27b-4717-a009-a70315fd05a6/volumes/kubernetes.io~projected/kube-api-access-2snjj major:0 minor:350 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~projected/kube-api-access-wgt55:{mountpoint:/var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~projected/kube-api-access-wgt55 major:0 minor:601 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~secret/metrics-tls major:0 minor:600 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5:{mountpoint:/var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5 major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd:{mountpoint:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~projected/kube-api-access-bd8ff:{mountpoint:/var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~projected/kube-api-access-bd8ff major:0 minor:932 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~secret/serving-cert major:0 minor:931 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc:{mountpoint:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc major:0 minor:247 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq:{mountpoint:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/13c71f7d-1485-4f86-beb2-ee16cf420350/volumes/kubernetes.io~projected/kube-api-access-zplb4:{mountpoint:/var/lib/kubelet/pods/13c71f7d-1485-4f86-beb2-ee16cf420350/volumes/kubernetes.io~projected/kube-api-access-zplb4 major:0 minor:618 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf:{mountpoint:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/ca-certs major:0 minor:534 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/kube-api-access-8hvsl:{mountpoint:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/kube-api-access-8hvsl major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:522 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7:{mountpoint:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7 major:0 minor:236 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a25632e-32d0-43d2-9be7-f515d29a1720/volumes/kubernetes.io~projected/kube-api-access-bcfsk:{mountpoint:/var/lib/kubelet/pods/2a25632e-32d0-43d2-9be7-f515d29a1720/volumes/kubernetes.io~projected/kube-api-access-bcfsk major:0 minor:1102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~projected/kube-api-access-sn8qc:{mountpoint:/var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~projected/kube-api-access-sn8qc major:0 minor:930 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~secret/cert major:0 minor:929 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r:{mountpoint:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:647 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz:{mountpoint:/var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~projected/kube-api-access-82f9g:{mountpoint:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~projected/kube-api-access-82f9g major:0 minor:869 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:868 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/webhook-cert major:0 minor:863 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~projected/kube-api-access-g2w6b:{mountpoint:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~projected/kube-api-access-g2w6b major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1011 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1001 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl:{mountpoint:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh:{mountpoint:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~secret/metrics-tls major:0 minor:448 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6:{mountpoint:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6 major:0 minor:262 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:630 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b2acd84-85c0-4c47-90a4-44745b79976d/volumes/kubernetes.io~projected/kube-api-access-28z2f:{mountpoint:/var/lib/kubelet/pods/5b2acd84-85c0-4c47-90a4-44745b79976d/volumes/kubernetes.io~projected/kube-api-access-28z2f major:0 minor:393 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~projected/kube-api-access-sx8j5:{mountpoint:/var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~projected/kube-api-access-sx8j5 major:0 minor:957 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:956 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~projected/kube-api-access-wmlh2:{mountpoint:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~projected/kube-api-access-wmlh2 major:0 minor:923 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:922 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:928 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6db2bfbd-d8db-4384-8979-23e8a1e87e5e/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/6db2bfbd-d8db-4384-8979-23e8a1e87e5e/volumes/kubernetes.io~secret/tls-certificates major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~projected/kube-api-access-bkfkr:{mountpoint:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~projected/kube-api-access-bkfkr major:0 minor:924 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:920 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:926 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~projected/kube-api-access-clhcj:{mountpoint:/var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~projected/kube-api-access-clhcj major:0 minor:722 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~secret/proxy-tls major:0 minor:549 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh:{mountpoint:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh major:0 minor:263 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~secret/webhook-certs major:0 minor:643 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~projected/kube-api-access-ff8tm:{mountpoint:/var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~projected/kube-api-access-ff8tm major:0 minor:919 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:918 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr:{mountpoint:/var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w:{mountpoint:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~projected/kube-api-access-sdkqm:{mountpoint:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~projected/kube-api-access-sdkqm major:0 minor:543 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/encryption-config major:0 minor:469 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/etcd-client major:0 minor:470 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/serving-cert major:0 minor:542 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~projected/kube-api-access-gwqln:{mountpoint:/var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~projected/kube-api-access-gwqln major:0 minor:966 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:955 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2:{mountpoint:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~secret/srv-cert major:0 minor:644 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~projected/kube-api-access-dvq2h:{mountpoint:/var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~projected/kube-api-access-dvq2h major:0 minor:1173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t:{mountpoint:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:645 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~projected/kube-api-access major:0 minor:844 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~secret/serving-cert major:0 minor:842 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr:{mountpoint:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr major:0 minor:261 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9548e397-0db4-41c8-9cc8-b575060e9c66/volumes/kubernetes.io~projected/kube-api-access-kbwfq:{mountpoint:/var/lib/kubelet/pods/9548e397-0db4-41c8-9cc8-b575060e9c66/volumes/kubernetes.io~projected/kube-api-access-kbwfq major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/ca-certs major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/kube-api-access-fbsq9:{mountpoint:/var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/kube-api-access-fbsq9 major:0 minor:538 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~projected/kube-api-access-sfb5c:{mountpoint:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~projected/kube-api-access-sfb5c major:0 minor:436 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/encryption-config major:0 minor:435 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/etcd-client major:0 minor:434 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/serving-cert major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~projected/kube-api-access-t56bf:{mountpoint:/var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~projected/kube-api-access-t56bf major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~secret/serving-cert major:0 minor:372 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~projected/kube-api-access-75jwh:{mountpoint:/var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~projected/kube-api-access-75jwh major:0 minor:806 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9 major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd:{mountpoint:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cert major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx:{mountpoint:/var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx major:0 minor:105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~projected/kube-api-access-dm77k:{mountpoint:/var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~projected/kube-api-access-dm77k major:0 minor:403 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~secret/signing-key major:0 minor:402 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~projected/kube-api-access-w477x:{mountpoint:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~projected/kube-api-access-w477x major:0 minor:872 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/certs major:0 minor:870 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:871 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~projected/kube-api-access-6fw5f:{mountpoint:/var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~projected/kube-api-access-6fw5f major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:775 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw:{mountpoint:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~secret/metrics-certs major:0 minor:648 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:496 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/tmp major:0 minor:497 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~projected/kube-api-access-hnkdr:{mountpoint:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~projected/kube-api-access-hnkdr major:0 minor:498 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb:{mountpoint:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh:{mountpoint:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:439 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~projected/kube-api-access-5qn7f:{mountpoint:/var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~projected/kube-api-access-5qn7f major:0 minor:550 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~secret/serving-cert major:0 minor:548 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv:{mountpoint:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~projected/kube-api-access-8jxdg:{mountpoint:/var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~projected/kube-api-access-8jxdg major:0 minor:699 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~secret/proxy-tls major:0 minor:694 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2316774-4ebc-4fa9-be07-eb1f16f614dd/volumes/kubernetes.io~projected/kube-api-access-lrgxg:{mountpoint:/var/lib/kubelet/pods/d2316774-4ebc-4fa9-be07-eb1f16f614dd/volumes/kubernetes.io~projected/kube-api-access-lrgxg major:0 minor:1098 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v:{mountpoint:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~secret/srv-cert major:0 minor:646 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~projected/kube-api-access-mk4ql:{mountpoint:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~projected/kube-api-access-mk4ql major:0 minor:925 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:921 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:927 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg:{mountpoint:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4:{mountpoint:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4 major:0 minor:231 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~secret/metrics-tls major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/deb67ea0-8342-40cb-b0f4-115270e878dd/volumes/kubernetes.io~projected/kube-api-access-62lvq:{mountpoint:/var/lib/kubelet/pods/deb67ea0-8342-40cb-b0f4-115270e878dd/volumes/kubernetes.io~projected/kube-api-access-62lvq major:0 minor:392 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e390416b-4fa1-41d5-bc74-9e779b252350/volumes/kubernetes.io~projected/kube-api-access-cz6h6:{mountpoint:/var/lib/kubelet/pods/e390416b-4fa1-41d5-bc74-9e779b252350/volumes/kubernetes.io~projected/kube-api-access-cz6h6 major:0 minor:1119 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~projected/kube-api-access-kkw55:{mountpoint:/var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~projected/kube-api-access-kkw55 major:0 minor:108 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~secret/cert major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~projected/kube-api-access-lfbx8:{mountpoint:/var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~projected/kube-api-access-lfbx8 major:0 minor:947 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:946 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f3be6654-f969-4952-976d-218c86af7d2d/volumes/kubernetes.io~projected/kube-api-access-9wnqw:{mountpoint:/var/lib/kubelet/pods/f3be6654-f969-4952-976d-218c86af7d2d/volumes/kubernetes.io~projected/kube-api-access-9wnqw major:0 minor:836 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5:{mountpoint:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5 major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~projected/kube-api-access-j5lv2:{mountpoint:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~projected/kube-api-access-j5lv2 major:0 minor:823 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:819 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:832 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp:{mountpoint:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:642 fsType:tmpfs blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/01783e1cd315b6ff781f25fb5c9ca797ef92a0172eef9d2dd48c8ff0c21a2670/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-1016:{mountpoint:/var/lib/containers/storage/overlay/b90c101634868caad9af688196bbe6ec9717c120cd4999f778dba9b6eddc442d/merged major:0 minor:1016 fsType:overlay blockSize:0} overlay_0-1018:{mountpoint:/var/lib/containers/storage/overlay/55e88fb78cba02efb41f00d961f2857bc28317c4b18607d7f0c0b83a0e2e3283/merged major:0 minor:1018 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/f94be2944f1b37080c2ffff6eafe29e3dc1d78fbef4d1ff27d31b7bc61c29b77/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1020:{mountpoint:/var/lib/containers/storage/overlay/9469635d3e267ca14b825f2f3fb682b3477b255c27415605bab797825b76abea/merged major:0 minor:1020 fsType:overlay blockSize:0} overlay_0-1025:{mountpoint:/var/lib/containers/storage/overlay/5d55ebe69614bb90545c61b3e84c6ea375fe10609c144f2a4d7691ef0fe63f96/merged major:0 minor:1025 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/f5660f3d340c6234740e91c333dc69a31e29591a4d4d8277e33ed81b9d18c80e/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-1029:{mountpoint:/var/lib/containers/storage/overlay/66eb6bcbdc9ce3ef85c0dcc4ff08ce9d29aab25bcc7942fd0325361722dc0158/merged major:0 minor:1029 fsType:overlay blockSize:0} overlay_0-1031:{mountpoint:/var/lib/containers/storage/overlay/09d39db70a45de3d00f9d45d2fd5152752e47932c298269242cfe199cc859355/merged major:0 minor:1031 fsType:overlay blockSize:0} 
overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/a8dd176d0bb00bca863ee6df4aa34e546a6494b13e371ad286017968f44a59e6/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1045:{mountpoint:/var/lib/containers/storage/overlay/1b4d7c8ba1add6b7e95cfdb5f7d82a562b8fcdf55ed2df11e0c4b1f3605aea0e/merged major:0 minor:1045 fsType:overlay blockSize:0} overlay_0-1047:{mountpoint:/var/lib/containers/storage/overlay/af2b2220eef641f25743807cb1100c5df4d0cf86370a08d23b23084e85375813/merged major:0 minor:1047 fsType:overlay blockSize:0} overlay_0-1053:{mountpoint:/var/lib/containers/storage/overlay/f802cdad5c307160788a96771f7af051fcb70ef9ded8af83dc72d3020275a288/merged major:0 minor:1053 fsType:overlay blockSize:0} overlay_0-1071:{mountpoint:/var/lib/containers/storage/overlay/45bddd33bdcd98c679b24643201ac8ac337892072ca3c075279af30cd7523b0e/merged major:0 minor:1071 fsType:overlay blockSize:0} overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/8bd9f1edcd92979d4e26120fb1142eb67c40f7a3ddbcc50370feb59495c26a9f/merged major:0 minor:1078 fsType:overlay blockSize:0} overlay_0-1079:{mountpoint:/var/lib/containers/storage/overlay/7688a97d1d9d74dabf6c677a762d27b544b9b77a4afc66c1b1a9e205d74330f4/merged major:0 minor:1079 fsType:overlay blockSize:0} overlay_0-1085:{mountpoint:/var/lib/containers/storage/overlay/e4d3c1c55563b44b53b2cb0c6da861869bd9e7d3291d19c1deb702cf547219ed/merged major:0 minor:1085 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/ba6efdbb35096020048579eee6675d7a909d7e8089941a98deb63d5181554716/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1105:{mountpoint:/var/lib/containers/storage/overlay/fe545b7afcfbb27173b6916b00542d0c0409373dd4cc95655d1d98840869ee49/merged major:0 minor:1105 fsType:overlay blockSize:0} overlay_0-1107:{mountpoint:/var/lib/containers/storage/overlay/05293f993febeca17508aaab8de64505eeb8021e68e1707757f5a899a8236c17/merged major:0 minor:1107 fsType:overlay 
blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/f96f7e92f1d74812984efb88b41795be046ef8d637941fa3c272484dd5f800b2/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1120:{mountpoint:/var/lib/containers/storage/overlay/a611e9c48136b2f9bec995cb6be5c4813422456e6cacbdc8e510c5cd1c7fffb5/merged major:0 minor:1120 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/b134a1ace76fd5c66743c1314808f6239867cd5f88fa46ae1a71d8cd5689add2/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1124:{mountpoint:/var/lib/containers/storage/overlay/979d85d412de257f271aff43819b14e4308943a1659577154699c561c76ca079/merged major:0 minor:1124 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/5c2f3a8c6816741e9b18dcc24d69e723150b1463d405793456dbc22e49186040/merged major:0 minor:1132 fsType:overlay blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/0c51ed9651b4ecd71eb45501b23a63658250b05b7724a4f1e4ad4eec37ccbf74/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/cf3d5f4b9d32fd137790fe8c97de5d91a24ae4f45a5c9ee55ccdc361022b358c/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/0735b3202eefa367d2e2bf8c9918199fd45d3c905433a2b45ac7dcfc6a5867f3/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1165:{mountpoint:/var/lib/containers/storage/overlay/e637c2e83ea13f07e4fdad7ca1598a7ce632a983469f0ac86800938ae5c95e20/merged major:0 minor:1165 fsType:overlay blockSize:0} overlay_0-1176:{mountpoint:/var/lib/containers/storage/overlay/127e80641893e04540f048ca2a33e72360d5feac1afb2eaf1eb261c653932954/merged major:0 minor:1176 fsType:overlay blockSize:0} overlay_0-1178:{mountpoint:/var/lib/containers/storage/overlay/38c15d2030f987a2bc8203222ec4f72d79c967d7707f191effeb1da89a9809eb/merged major:0 minor:1178 fsType:overlay 
blockSize:0} overlay_0-1181:{mountpoint:/var/lib/containers/storage/overlay/a94ac371307ce53279eaceb7ad7e804bdb9c90685bd571efd081c82aeb00c688/merged major:0 minor:1181 fsType:overlay blockSize:0} overlay_0-1183:{mountpoint:/var/lib/containers/storage/overlay/018189784ca1bdbed0b6b923509de0a6c14afeaaeb98aee1a147b23051c5b4b0/merged major:0 minor:1183 fsType:overlay blockSize:0} overlay_0-1189:{mountpoint:/var/lib/containers/storage/overlay/1f5e9e108599b671e216ac967f65e3db48ac70625dd768d8532d43cff406c7a2/merged major:0 minor:1189 fsType:overlay blockSize:0} overlay_0-1204:{mountpoint:/var/lib/containers/storage/overlay/537764a20e10cbb0cc5d6ad2cd1e7e94094535b6daf53fbcc97010918869ee11/merged major:0 minor:1204 fsType:overlay blockSize:0} overlay_0-1209:{mountpoint:/var/lib/containers/storage/overlay/e358e890856ec6ca536f4d6382ea46fa23abbf5e0345ca8d3cdc3cdd629c703f/merged major:0 minor:1209 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/157e1f54ca48648e41da36293453ff16c93b4722d12946dff3eb1642c6139611/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-1214:{mountpoint:/var/lib/containers/storage/overlay/6f3e19338d834a7c1f03afc22e192cce914ea8715da288009602c2aaadfbd65d/merged major:0 minor:1214 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/e4f39858d7dbc18eacb5d0462192aac1e4abbd24c86cb6bd029bc4697139e490/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/feaa86af7dd75e79edc740950a9e0152e688c53901cfd7604df849927b7287a3/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/dcdc4e8ba8826281e7c48d612932a8283eca90e33683e6b152d27de6fd6b2e86/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/bd7bab062f77bf03de9196d10e71403888065cb937fb823e196d7f904de4d2bd/merged major:0 minor:140 fsType:overlay 
blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/ef85dc54f6fcbc4f28acd3824a659281588e2c52dd6225bf08927e64f8a27ea1/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/21950acd515b322dd3043586a49f7d90767a54668529984980041ff9d210a9fc/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/4246ab1d6e9a8a00f0d97266dc0500d4ae8489756bb36813e1ec54f41637aa14/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/e1ebd9316edb7c36a602989d853bd0fa20c24bccb5b998f485d2459b42777462/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-159:{mountpoint:/var/lib/containers/storage/overlay/c2034c98cf3b50d944b38ad7c3cc732dfede273657536854d0237c14ca43d337/merged major:0 minor:159 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/d036ac79db3579566b9f082874ca7cba70cf8521f51f459f164bc0c3911577a0/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/f85503cb2381ef71b95bb53999a50aa13076ac6e30fb3f802a04cccf06aadc78/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2f49ceb38fb13fbc573f4f6545c4ece5a35ed7649dbbca3bc64725e0971cc0d7/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/229da0185fd0270169de6748533c5fea41278a36cc273108430fa76d070eac1d/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/51411eb6139db5ed816ee46b541b7bc2cfd9737fba520b4f6e99b7b6f25ef049/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/657960edd263bbcba4240d0f00ef32f1b1137600861e4b928e950bf5dbe19341/merged major:0 minor:185 fsType:overlay blockSize:0} 
overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/95582931e54039022659918d30650f0d072b780faad8511dd4e30b23aebf99bf/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/419a3e4c35d34c43a3c9bf5cb4bbb9c85b2ba671531141864048342452cb5b77/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/607f233641ed57ea479467f7d43f45fa712452a8c4ff33cf66965d9d47846854/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/0f8a3b4c2b206d258b50d3c588c905db0be0cf7271f1e9b4a99a1a926c114a93/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/786e8c1aead46264e9407de4e8b6aeb236155456022af57865756870d3596e32/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/614f9b39bd4b3f55fe92c291ef782f20e5f72f47e630051625c5f6a6aa2008c8/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-249:{mountpoint:/var/lib/containers/storage/overlay/3223bac71a6f3e252f09ff63e99103a1c940a430f3b2e88f7ae923bb8963bda6/merged major:0 minor:249 fsType:overlay blockSize:0} overlay_0-271:{mountpoint:/var/lib/containers/storage/overlay/4de6d96f91604e9ed1eb5509c61b1b17ae0cf75029204790447a138485f6514d/merged major:0 minor:271 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/e70df017947419d5d640516785ffd646c9d57b12c05c6c21ecee7649f1a0113e/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/b987e6fb46bb065749b28f77ff957a27abd7400d158649f160f0bae6618fb514/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/1e25e05c83287370a90b98c810023b84fdb0162d55fcc086e87fd5d6a516a87b/merged major:0 minor:290 fsType:overlay blockSize:0} 
overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/47bad877d182dec1a4563f3823eb5cdb4b8042e68496fbac655671f25d6555e1/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/45935998ec23985e73533d1f2a7c4401c6b339f25ba9cc57f3a2eb892431639e/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/b9a09ea8ce4f5b238043b8fe430dd3d67b60ec5970624fe0d3771e6f8ceb0faa/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/18e917e829cd5e34c8d03af304423da9d4220d28ec9261fe63c03801051b85c9/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/ae88d967fb6107201788847cbab7b83cc380c45c22148c347d2b54ed59b2e427/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/6fbfeeb04079a0c340807555834030154ad0df27cc93b2bef119f97ef92f9023/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/f408ae97d6dfe662463e184f3d317638d3d31f77089f23b1929e4eb745a9debc/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/419ad7a44dea2fb852ce7b17d9ced853e80a07d47fad574944dc8e1a0d45b295/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/057f8438f42935419cbb70ef0bb99e5572cea990b315452e1ca6dc13c666a060/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/a5efc63eeb5f7fb85d3c008c839c0206cbc1332d6d1dba80fdc45811a626360f/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/98831302aa957e0bed51c2d1f3a5d91be65605571b811c43160e265ec8ee6916/merged major:0 minor:317 fsType:overlay blockSize:0} 
overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/27fe3e90ab9cc94e0aaaf78017660b96cf41bd7a690e679205b621d69080999f/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/357bce2e30544ab27062976bcd79e3faf637a91c7874fbdd034d97d3af0b1d77/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/a7157f1a2c497edcdc82040086baf39f471d95a384bf9b82c2e08060bdff9eb8/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/75ce5e0b006fcb0f97c8fe06aba4b534098a52a7bd026359532813ca4efc2a66/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/0207844fd857465a29f15e8a65a595262bc8332faeacb0900740013f1d05804b/merged major:0 minor:337 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/a42209a6251500234097676b5b7d9e2480df9344382bfafe7d8a5a92d9e27440/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/0f287acb7aef0b9413eec85f109c5481272ed1e837c3baa33cd840b80656ed41/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/b395fefd8a09745a92c31e110ee976ee3f793ac250619226fab4636b3f6957f4/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/38d58efb88765d03fc9d030cbc0341326878a22d769b684317024dc96abbacde/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/2deabf818feb100febab998071baa2126a68f8a739420c674f50f668dac1d2a9/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/cd79baf8c2df4a0476a73f9251c926d1d1c00c9901c210b83bd44414a3941aed/merged major:0 minor:351 fsType:overlay blockSize:0} 
overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/749bcf475ee22348e52db6501d761e1a33f38d6974b02bbdb95e56d67ccdb0bd/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/c1c2116f365aa03ade56cecde77578e48873c704d68e25d1d546bb7fd80bba85/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/cb393bae69f6cca683e2e607b5af72f7340e653022b74579c8e516bcbef3260f/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/a10f524658b4005e17ebbc9a3439101dd0e02c0e1def07e28816fad7c693c22b/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-373:{mountpoint:/var/lib/containers/storage/overlay/36ce2f857ffe39512d2e99bde8161f8c17555a99a06dc10d05ff44a4c3990151/merged major:0 minor:373 fsType:overlay blockSize:0} overlay_0-374:{mountpoint:/var/lib/containers/storage/overlay/7678cfe6e34ee4b1299ce53a446681c966c924caf3a722c24019e172783832e9/merged major:0 minor:374 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/8e4bc31e22ab20777ba973b519b189ddf1717ecc9eddec144dba2d5c5ee5029c/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/2e29f250fd9dfcc4c1db8e4c1f3fff28d605307efbe89ec39fa3ef57e3a88b54/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/40a59c43d3d7746901c1a5e2c02d53b8977e7474c31766fc3eb1e0e934f23a16/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/83cae054a4d9fdeb971dd7cc9e7d5637c6178cc489fcba904b63e0da5832ffb9/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/03f3b9a8c578a986eb7f635a629de130d9af444ff35303a28e87591eab614007/merged major:0 minor:409 fsType:overlay blockSize:0} 
overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/9ecf71ecf151469eabb7a206282e2ff87daac69ac112107d1be2b179bf34ed3a/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/d2c272678a66e459f6ce8ab622513aefca8bd1ef6c1ddfbfe92fc4aef9782325/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-415:{mountpoint:/var/lib/containers/storage/overlay/427bc99a65bf6bc5e568d44dd73b28a7538d01b4d42daf1139d5564f3177a009/merged major:0 minor:415 fsType:overlay blockSize:0} overlay_0-423:{mountpoint:/var/lib/containers/storage/overlay/ad580acb46d01f279faacc3d2d510282e93ebf86d12d3f32a7877de4022969c4/merged major:0 minor:423 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/324bd0f3047c2cd9ad726317b7808b72aac6e2a6de82481098787679f0fa1c6f/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/f25dcd6255fdccc52faaae23ae8ac2c2cc9b21f353d0566827251efc7820fb61/merged major:0 minor:442 fsType:overlay blockSize:0} overlay_0-449:{mountpoint:/var/lib/containers/storage/overlay/a2427217b7ba370ceea14cdbb7983a876bc32284fd6022285cd14dd2c5480c15/merged major:0 minor:449 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/e9ad375cf97834352ab3aa7c5cc6aa80a576cad0227c09e25edc02f2066f74bf/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/5e6cf1dd615b1c476830fa5284e954fbdf91fe2376a7fc59f67950bef1ac9fc2/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/26d21e02652345cd9fa0bcdae05bab7418cb4414fa49b3e6d69486d4a6473fcf/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/bec3f7b0c860031f87ca47ab4095c79c8f079fda05554edc37c1272ace377a95/merged major:0 minor:46 fsType:overlay blockSize:0} 
overlay_0-465:{mountpoint:/var/lib/containers/storage/overlay/d7233fc50b7f47f4ab68281f3e0c3761ea51e1b3afdafc71d1fbc8a23fa717ae/merged major:0 minor:465 fsType:overlay blockSize:0} overlay_0-475:{mountpoint:/var/lib/containers/storage/overlay/e9f892c5eb7ca376b253411c56d531e444a3cd767bdd735b7688fd32c0d7ac7a/merged major:0 minor:475 fsType:overlay blockSize:0} overlay_0-478:{mountpoint:/var/lib/containers/storage/overlay/c74e374dc1bcf394c2b081f11d5f13a518d4a73396039a0c3c35dd9a9299df5c/merged major:0 minor:478 fsType:overlay blockSize:0} overlay_0-488:{mountpoint:/var/lib/containers/storage/overlay/192e3ed87ceeb9c45e34a1a56b626e2b80add27ed9849ce1d13c3c9814e802ef/merged major:0 minor:488 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/ed0dee13606023b32f1790ec9cf1ada3e3cc7799009142879a10105f0b065c61/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/7b98322ad936e45f96534806151ec1e77b1b81801e8ed41ac0d0f55afa4809d4/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/28b6d968e2096f46334582f37ff56799df5966337cdba39069e2c2aecd00cd3e/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/5791feae268d0e056e8efd9acc05df7d9ae2cf89bb1c7246dde73cb2328c8431/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-501:{mountpoint:/var/lib/containers/storage/overlay/3a116a96f486b1b10223889562382e01715f6791b28dfd0c4297d3758a6f9781/merged major:0 minor:501 fsType:overlay blockSize:0} overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/8a749d73d4e6f69af1138859b79dd31a98b1bd3bec47b09103e3220cbcca7317/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/acdd11ae6a490de11ee36ebfee0da24a22f5a9b5644ffca8970b818f15a99a10/merged major:0 minor:518 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/9d16d54267c88ea313cb5bded24c7c228d1b8664dcfe41751d1d3b8f9d8cd0bd/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/976cdfcf5417b735ea260d41ada2db0bf0f0efac16216e202b325588f9f636b9/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/21e80cabf90489cdca69165aff6b6d6221012d8f13e6dda9fad47d22c46f18b0/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/e0fa84e21d4f260038a622112b4bfa1225aee6f6e79c3e5a0ca44c2c33f7cece/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/662d3b046959c029830dab179750771d07de280d91c933d2524b0bbffd89e9c2/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/663425f7ceac35740bbe8d588c14bbbc8afa1c5128f25685e772727a96abfc78/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/8425b47df4523dec9f3e72e8345d70c1c1c523dc365fe8f78b0e4adc4b455055/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-563:{mountpoint:/var/lib/containers/storage/overlay/6ad9702a4ca929aaa311b78b2a8486a0960b4783f79afac48f287c9ca5e86e28/merged major:0 minor:563 fsType:overlay blockSize:0} overlay_0-570:{mountpoint:/var/lib/containers/storage/overlay/b1140509c8f6b7b02a4f36ca7444dbcd826fb1af9fd6cf72683d068d9ac90792/merged major:0 minor:570 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/d9172c58865910f3d9925638189384536f1d27b5b231b03752587ffbcfa1a3da/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-579:{mountpoint:/var/lib/containers/storage/overlay/0c4ac182eaafe81e59618bc114ebfab7d99402e4599f3d249f0e304bdaf0a95a/merged major:0 minor:579 fsType:overlay blockSize:0} 
overlay_0-581:{mountpoint:/var/lib/containers/storage/overlay/80689e2708d8e139cf148a6153316739d0619fda3bade85449da728327c3e25b/merged major:0 minor:581 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/ab204bed63316c7a8c9a240e20354b30099d620989901272f18ae458571dc47c/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/a43afd61d7edbe73efc09cfee2fc1749dbdf48b694acbcd7f7693e4dca396e14/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/12d3ed9e2bb538a408b19100cf8ac59d50ec1468ae63f1e0868d8158a06cfb02/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/4cf011e3bb2fcc4c2806da969cca4a50a58109c7b240818077de975f9b67fa4b/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/b7d26585f221d28a29c03127a7e25f7e224d84b922250a1784f2926c53b45532/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/9454ad1651c88a1de1776709ee401e527788b8a45885cdf089dd28084e69845e/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/2a3701dc5e2a4c93ecccc1c3f3acf88518bd1af4e9795298190d3ce773d814fd/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/3de4390bba4713e948b61edf1904607fe0ed207948654556a52e5ff6e0d9e2e0/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-626:{mountpoint:/var/lib/containers/storage/overlay/a7045f4da5342eedd507001b4453db6ee8a465163f3a02ff919421ceaf6ce593/merged major:0 minor:626 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/35bb31a050aa7f1e2be954bcf5bf229b15017d4eebe09532d438a0c9127238d4/merged major:0 minor:66 fsType:overlay blockSize:0} 
overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/11301a2a42c58cbbe88f5b4cd9c1b010b3c5b9c3de9715a1f3515c8a18bc636d/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/c18e5b2a01fae7b0441d146b8bdb84f06dfce9c521c50f1f57a066bbbab181d1/merged major:0 minor:667 fsType:overlay blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/d816605543ad22a02fa9a8b29498702733904e15cb6e7c06d31d66795bfbf887/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/3d245b286f7b552fb8fe27a3ea84182dbb344114b6350e937f18862d20669ce0/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-673:{mountpoint:/var/lib/containers/storage/overlay/43d703376dd3553e55f9b7b1efda9ff938b9aebfeb3a2474042300d308e1df47/merged major:0 minor:673 fsType:overlay blockSize:0} overlay_0-675:{mountpoint:/var/lib/containers/storage/overlay/246c1d048875ffa7bf04de96828e9837a2c3eda9a3fd67e14007eaa95f4d1b8c/merged major:0 minor:675 fsType:overlay blockSize:0} overlay_0-677:{mountpoint:/var/lib/containers/storage/overlay/4804ece2ea1f921a62328d2a51410b828e1fc0cebd736dbc3a4e96feeeaf060e/merged major:0 minor:677 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/03e96043a879199243a333f8a4dc7a8404326c24b25a7a70b9ac3cc7340f4222/merged major:0 minor:683 fsType:overlay blockSize:0} overlay_0-685:{mountpoint:/var/lib/containers/storage/overlay/15f00d5836697ee0f695c1a4caf1a9973f87153f5dd8f75f05e7a61fb0caafd8/merged major:0 minor:685 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/089cd83e750e5d231dd0470e62678bb11e8a9222faab24d2124ce19253512a3e/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/c83e7620555cfa76d3cefef63ff79c51042d6c196bf2e3d7ac1a69d74cf99715/merged major:0 minor:688 fsType:overlay blockSize:0} 
overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/7b70129ddebc66f354e5d73861439ad351953d06ea1a1489e198b32c6847ca7a/merged major:0 minor:704 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/4123f625363f79fd6416404971be1d88a2f7c917cf0e12fb3a620b431359e722/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/c0368c9e75e1906aa60039ab30615f8078868bf6b5aa972116b8fa3786534f44/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-712:{mountpoint:/var/lib/containers/storage/overlay/f1a4a03ed1ae74462c79b09b0b0783dec4eedaa83c5c790a8e655b79deee594a/merged major:0 minor:712 fsType:overlay blockSize:0} overlay_0-717:{mountpoint:/var/lib/containers/storage/overlay/8cac925e1412d73c9b13610b2830179634319e1f17880b2e231c7e7c40c52ec4/merged major:0 minor:717 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/f3b6e181eebf01013d8510b0b9dd724b08c5c29504bd4ba1195a2c2c5cb8ffd5/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-729:{mountpoint:/var/lib/containers/storage/overlay/094030e093e254da45ce2b0d29b1b80f46f4829a6eb102f5e6cb2d588f1753ec/merged major:0 minor:729 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/6ca2e0c15c45e1a3a4623c0e3ba58fd1947e7ead867b8e9b36247d38a3fee135/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/41550b045fc1cfa56f328d2dfa65600a37f8ffec0d58bd22539c5b0f851b6003/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-741:{mountpoint:/var/lib/containers/storage/overlay/3120f23171fb4706e2eff5be58a1c25d8949be9caede48c47542b1f723ccefe4/merged major:0 minor:741 fsType:overlay blockSize:0} overlay_0-755:{mountpoint:/var/lib/containers/storage/overlay/4849058ed663d33ca933e21e996638234d831167805787e6bcd3caee19d28e17/merged major:0 minor:755 fsType:overlay blockSize:0} 
overlay_0-757:{mountpoint:/var/lib/containers/storage/overlay/79ce803a56664f708833f973ed55f2aceec9cd526804f0dc8754b8ae86ca0ba6/merged major:0 minor:757 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/bb02f1d77bb0055165fc2f45b0740da59b8b8720b88553a08680e3bf3592cd30/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/7e529d4f4ded67613f79619ccae94aea24dc81c916adcd989e559f34be8fb827/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-791:{mountpoint:/var/lib/containers/storage/overlay/d472dd429f3468378bfd48befbeb65d233f2bb939c0f348fb870a16b7bbcc0df/merged major:0 minor:791 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/34ad10d481df473d91d45dde7f3d36648bed72819e218cb390cf8bdd14c108c9/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-809:{mountpoint:/var/lib/containers/storage/overlay/692ef13d66cf001849db3b8fb1cde2127ba83bee51043b53195b4fadbfe55ecd/merged major:0 minor:809 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/be568ce6f7fb96f05fd48750ccfa76db8c8799d285a5a9b8b5e850a2ca5f93d8/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/e2495e338b2ffca9fbf0d005188038328a79c516eccf80b61457baf304653b55/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-826:{mountpoint:/var/lib/containers/storage/overlay/d1d54b220dae23e334f6f6c663ac88420dca3a8fd47af0d2c0cac02996f7a9ee/merged major:0 minor:826 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/bde79b42c9ee6a84b0ece5d6420f91cde03b75343b812ad28ad3a5d6de6a1e0b/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/3674c6922f3f90d50e0e89d1ef3ddab7b44efe3f0edaaaad9b856a2992f730bb/merged major:0 minor:840 fsType:overlay blockSize:0} 
overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/f6cecf974d6888b6fcd14f31f844ef932a723353f93ddf9794913258c9d92aa3/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/26c8ff73e9c6d7626fe9aefe5debd295cc630b0ad7d6d7d94232e755bd70d592/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/dbebb588299b7bcae0e1a7ccb61ec22a44c356b0fcc3244d532465937cbe13e8/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/5e481551910ba8f095e523453bb9f9a70699a8d0e9debb2f39c487d2fcbcbe5f/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-859:{mountpoint:/var/lib/containers/storage/overlay/2f06180a63d472ae394a829a75a8e93d272cb09878289489ad89175c658fa069/merged major:0 minor:859 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/dd1220c045f381a9d2069178f448913dbee884f9e293ca813635dec0afaec861/merged major:0 minor:861 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/cded49c138b71d21e1d2496869688957c6afd77d09badfe472aa3033ca89e571/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/a3c7710d60fab0e997f15363f962144c4b004a791d40cfc34b0ba20bcf64fc00/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-881:{mountpoint:/var/lib/containers/storage/overlay/6e8947ec16f937802c8360f52ec443dc3e75353001069941aa9aada5be49902c/merged major:0 minor:881 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/68c64ebc182afaa0c676b726b1a5698b4a7505129f88f97c5387b3c0e45ac354/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/477d3ad9a14ea8f6ea2498fb9167dfb96620a993a904dac50124811f2122a542/merged major:0 
minor:891 fsType:overlay blockSize:0} overlay_0-893:{mountpoint:/var/lib/containers/storage/overlay/2989827b35557fcc868ed840780d97e5457003600ae638117766e433ae5f403a/merged major:0 minor:893 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/9628cf47d581b320b122598a82cda7d575ca13fbf12a381ec3603fe0db24490f/merged major:0 minor:901 fsType:overlay blockSize:0} overlay_0-907:{mountpoint:/var/lib/containers/storage/overlay/65ab514c68f060048bab35ff01b6e798ffc7d84b32bdcf2ba2afcfe6702038c4/merged major:0 minor:907 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/2bc930ef1c927570d3930eadce50110023bc5556212fc402f2222bcec391478e/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/582ded466467eb6496fdc19760d9ce27cfa3e2ee9f32cf0ed2952a280bc8d96f/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-938:{mountpoint:/var/lib/containers/storage/overlay/35725e10723775bc5af8ae630348d8eba27b2f61d2c83ee17380fe2013ed7473/merged major:0 minor:938 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/31cbeec8f080821660addd2a47a789bfbc05d6824863728af56c54c5e2ec0f6b/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-948:{mountpoint:/var/lib/containers/storage/overlay/849c48102787d84fc76b16c466f6fc3c9a3be9888700362269d0ca4751e4e632/merged major:0 minor:948 fsType:overlay blockSize:0} overlay_0-953:{mountpoint:/var/lib/containers/storage/overlay/fe9d210c9b7d799a5e56785349ed2f8232c1099bca63dc969085bcb90404593b/merged major:0 minor:953 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/aac4b111d926c87358bbcd2a73d5717c5bf1a28210158b6fed5d11f2fd728fed/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/8bab112d4951e1644c99c0330bd5a14365d680f4fac27e9145737b240155cf27/merged major:0 minor:969 fsType:overlay 
blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/5eeefba9de808d89861b13807ca32961122f6f6570d13250dc245c4caf63b5a3/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/f302919645e9c7479f537237f441df6d7f3d65bc35318ad2182184c14c30a0dc/merged major:0 minor:971 fsType:overlay blockSize:0} overlay_0-973:{mountpoint:/var/lib/containers/storage/overlay/7242ed4947a510f2137d6210eed04484374405bee222c2b9082c8e8d3e68878a/merged major:0 minor:973 fsType:overlay blockSize:0} overlay_0-975:{mountpoint:/var/lib/containers/storage/overlay/71ab26693ce146b6c4e7f39493045211221102617e0c6711472c38b0afb79f12/merged major:0 minor:975 fsType:overlay blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/6bae31da9163ba549f9b024f013992ac35f03ffd7c50b1ae6250ff3db836d77b/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-979:{mountpoint:/var/lib/containers/storage/overlay/dbef0cd5c8d08a0dba1fa9749a7571d6e00646efb41dfb9cb0a47a6505b22f6c/merged major:0 minor:979 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/4225031728afce2b477317b4848836e66d7492b7af608b013a66c563a199dead/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/98703b4929329bd5e945332e5f2166cac6171afe790c88fabdcb93be84b28d89/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/a457174aae5891afb48370c45aa8ef18382bb105281c119b599351d364fff4cf/merged major:0 minor:992 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/7ccdbbfc5e5bca2b08275eba0ca73bb8e97033a3ce441fa5103ae34de8e20c18/merged major:0 minor:996 fsType:overlay blockSize:0}] Mar 18 13:23:56.203078 master-0 kubenswrapper[27835]: I0318 13:23:56.200515 27835 manager.go:217] Machine: {Timestamp:2026-03-18 13:23:56.19872097 +0000 UTC m=+0.163932610 
CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:0b28c177d1c547b6b192765c9d5bc20c SystemUUID:0b28c177-d1c5-47b6-b192-765c9d5bc20c BootID:82754421-b051-4950-9dab-4c3886d93f55 Filesystems:[{Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aad28fcc9206746f0d26ad1538815d0d7f16ddcfe6c46b81f66fd625f49ae815/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-729 DeviceMajor:0 DeviceMinor:729 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/kube-api-access-8hvsl DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:549 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-791 DeviceMajor:0 DeviceMinor:791 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a/userdata/shm DeviceMajor:0 
DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:463 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1107 DeviceMajor:0 DeviceMinor:1107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~projected/kube-api-access-hnkdr DeviceMajor:0 DeviceMinor:498 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:844 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-501 DeviceMajor:0 DeviceMinor:501 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 
DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~projected/kube-api-access-4fxgl DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1105 DeviceMajor:0 DeviceMinor:1105 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475/userdata/shm DeviceMajor:0 DeviceMinor:283 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd/userdata/shm DeviceMajor:0 DeviceMinor:292 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-673 DeviceMajor:0 DeviceMinor:673 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2316774-4ebc-4fa9-be07-eb1f16f614dd/volumes/kubernetes.io~projected/kube-api-access-lrgxg DeviceMajor:0 DeviceMinor:1098 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521/userdata/shm DeviceMajor:0 DeviceMinor:845 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:278 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4/userdata/shm DeviceMajor:0 DeviceMinor:545 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1204 DeviceMajor:0 DeviceMinor:1204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:819 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:462 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:534 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:863 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-599 DeviceMajor:0 DeviceMinor:599 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1079 DeviceMajor:0 DeviceMinor:1079 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:928 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196/userdata/shm DeviceMajor:0 DeviceMinor:1174 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:497 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-826 DeviceMajor:0 DeviceMinor:826 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~projected/kube-api-access-5qn7f DeviceMajor:0 DeviceMinor:550 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~projected/kube-api-access-v6zmc DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/13c71f7d-1485-4f86-beb2-ee16cf420350/volumes/kubernetes.io~projected/kube-api-access-zplb4 DeviceMajor:0 DeviceMinor:618 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~projected/kube-api-access-lgt5t DeviceMajor:0 DeviceMinor:255 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:931 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-563 DeviceMajor:0 DeviceMinor:563 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1025 DeviceMajor:0 DeviceMinor:1025 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d840b1327f66205cf6b23b15b1f1425e68ae2cb9d5dd3a177c50ba638a9ce65/userdata/shm DeviceMajor:0 DeviceMinor:700 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~projected/kube-api-access-t56bf DeviceMajor:0 DeviceMinor:508 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~projected/kube-api-access-lgzkd DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~projected/kube-api-access-4dw4r DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/77922c67e22a90e02f2bc6f9c2c3361d1f9624d65d1b4a186c450f61aa3c27f3/userdata/shm DeviceMajor:0 DeviceMinor:848 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-893 DeviceMajor:0 DeviceMinor:893 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-478 DeviceMajor:0 DeviceMinor:478 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5b2acd84-85c0-4c47-90a4-44745b79976d/volumes/kubernetes.io~projected/kube-api-access-28z2f DeviceMajor:0 DeviceMinor:393 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:470 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:372 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1124 DeviceMajor:0 DeviceMinor:1124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/70ca4cb931b7545d294f00c69b8bfe23595c69c1d94a66566a713806aa3eda58/userdata/shm DeviceMajor:0 DeviceMinor:555 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:448 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1181 DeviceMajor:0 DeviceMinor:1181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~projected/kube-api-access-bkfkr DeviceMajor:0 DeviceMinor:924 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1016 DeviceMajor:0 DeviceMinor:1016 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-449 DeviceMajor:0 DeviceMinor:449 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~projected/kube-api-access-w4cqp DeviceMajor:0 DeviceMinor:273 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8eff549-02f3-446e-b3a1-a66cecdc02a6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/217f2ddac8460682f53f483f75566ba056797e6cb9215803ff6c892d4d2a8575/userdata/shm DeviceMajor:0 DeviceMinor:785 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/kube-api-access-fbsq9 DeviceMajor:0 DeviceMinor:538 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-741 DeviceMajor:0 DeviceMinor:741 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1178 DeviceMajor:0 DeviceMinor:1178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:946 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/637824f5bb31724423d6735813857b47b37d15ab88987d8a010fd58f58c5ab69/userdata/shm DeviceMajor:0 DeviceMinor:663 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:775 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:926 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2/userdata/shm DeviceMajor:0 DeviceMinor:536 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:630 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fe643e40-d06d-4e69-9be3-0065c2a78567/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:642 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731/userdata/shm DeviceMajor:0 DeviceMinor:874 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-570 DeviceMajor:0 DeviceMinor:570 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:835 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-755 DeviceMajor:0 DeviceMinor:755 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e48d984bde067fff459bf66d3627856479bf9e2fe952a4228b45cfe581507bda/userdata/shm DeviceMajor:0 DeviceMinor:483 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e390416b-4fa1-41d5-bc74-9e779b252350/volumes/kubernetes.io~projected/kube-api-access-cz6h6 DeviceMajor:0 DeviceMinor:1119 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-373 DeviceMajor:0 DeviceMinor:373 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:437 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~projected/kube-api-access-dvdtw DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/caf8685ec1d7171c12646ad4a2c704d85c1985e24c1994b6f4a18dfa14666d6f/userdata/shm DeviceMajor:0 DeviceMinor:356 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f1629a9c890b158ad74d9b6c35c2de2573e526e00eff6015bd3861ec48b5231/userdata/shm DeviceMajor:0 DeviceMinor:1117 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~projected/kube-api-access-mddh9 DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca95e515f4a5a1b63626328ea2ad328d0f3f07c258a5281fc61399ac842b383f/userdata/shm DeviceMajor:0 DeviceMinor:838 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ea9eb53-0385-4a1a-a64f-696f8520cf49/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:647 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29/userdata/shm DeviceMajor:0 DeviceMinor:480 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6db2bfbd-d8db-4384-8979-23e8a1e87e5e/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:827 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5f5a7d7c0e9750e48ccca14b1c41ca2a57206319db458c1aefe78bdb62a1f334/userdata/shm DeviceMajor:0 DeviceMinor:951 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-859 DeviceMajor:0 DeviceMinor:859 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0a6090f0-3a27-4102-b8dd-b071644a3543/volumes/kubernetes.io~projected/kube-api-access-bd8ff DeviceMajor:0 DeviceMinor:932 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:548 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~projected/kube-api-access-mc8t5 DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1120 
DeviceMajor:0 DeviceMinor:1120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-675 DeviceMajor:0 DeviceMinor:675 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-907 DeviceMajor:0 DeviceMinor:907 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/kube-api-access-wmzr4 DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~projected/kube-api-access-zlb8t DeviceMajor:0 DeviceMinor:837 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71/userdata/shm DeviceMajor:0 DeviceMinor:886 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:438 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:533 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/deb67ea0-8342-40cb-b0f4-115270e878dd/volumes/kubernetes.io~projected/kube-api-access-62lvq DeviceMajor:0 DeviceMinor:392 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-475 DeviceMajor:0 DeviceMinor:475 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~projected/kube-api-access-ff8tm DeviceMajor:0 DeviceMinor:919 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:921 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09/userdata/shm DeviceMajor:0 DeviceMinor:307 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:646 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:834 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~projected/kube-api-access-sn8qc DeviceMajor:0 DeviceMinor:930 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/34a3a84b-048f-4822-9f05-0e7509327ca2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f/userdata/shm DeviceMajor:0 DeviceMinor:228 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-271 DeviceMajor:0 DeviceMinor:271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/595f697b-d238-4500-84ce-1ea00377f05e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4202c2-c330-4a5d-87e7-0a63d069113f/volumes/kubernetes.io~projected/kube-api-access-kb5b6 DeviceMajor:0 DeviceMinor:262 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1034 
DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3728ab-5d50-40ac-95b3-74a5b62a557f/volumes/kubernetes.io~projected/kube-api-access-29qbv DeviceMajor:0 DeviceMinor:284 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-953 DeviceMajor:0 DeviceMinor:953 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~projected/kube-api-access-zlzqd DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f/userdata/shm DeviceMajor:0 DeviceMinor:661 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-938 DeviceMajor:0 DeviceMinor:938 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~projected/kube-api-access-h4vtf DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-677 DeviceMajor:0 
DeviceMinor:677 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:507 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-415 DeviceMajor:0 DeviceMinor:415 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9d09a56-ed4c-40b7-8be1-f3934c07296e/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa7bdc6eb3bcdebec3d64b4ce8194bafce362b67c9019cd975ec6f9a5ac40f46/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:956 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd/userdata/shm DeviceMajor:0 DeviceMinor:961 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~projected/kube-api-access-dm77k DeviceMajor:0 DeviceMinor:403 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:434 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/708812af-3249-4d57-8f28-055da22a7329/volumes/kubernetes.io~projected/kube-api-access-clhcj DeviceMajor:0 DeviceMinor:722 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f7f4ae93-428b-4ebd-bfaa-18359b407ede/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:643 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e/userdata/shm DeviceMajor:0 DeviceMinor:811 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~projected/kube-api-access-j5lv2 DeviceMajor:0 DeviceMinor:823 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/07505113-d5e7-4ea3-b9cc-8f08cba45ccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/15a97fe2-5022-4997-9936-4247ae7ecb43/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc/volumes/kubernetes.io~projected/kube-api-access-qhs5w DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2b12af9a-8041-477f-90eb-05bb6ae7861a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:929 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19a76585-a9ac-4ed9-9146-bb77b31848c6/volumes/kubernetes.io~projected/kube-api-access-w9zbp DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9e3149a06c6f175072a4f298029a63d5886a08058f2cfbf229c65bf7015d1f34/userdata/shm DeviceMajor:0 DeviceMinor:503 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74f296d4-40d1-449e-88ea-db6c1574a11a/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:918 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-881 DeviceMajor:0 DeviceMinor:881 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e7156cf-2d68-4de8-b7e7-60e1539590dd/volumes/kubernetes.io~projected/kube-api-access-z84cq DeviceMajor:0 DeviceMinor:143 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5e20d46e2ff68c35ec5f71de1a7613daa62264adc487ab5ef65e9454569fe466/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1214 DeviceMajor:0 DeviceMinor:1214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes/kubernetes.io~projected/kube-api-access-nbqfh DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f/userdata/shm DeviceMajor:0 DeviceMinor:394 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fb65c095-ca20-432c-a069-ad6719fca9c8/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:832 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/702076a9-b542-4768-9e9e-99b2cac0a66e/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:920 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/394061b4-1bac-4699-96d2-88558c1adaf8/volumes/kubernetes.io~projected/kube-api-access-r7bpz DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2455453-5943-49ef-bfea-cba077197da0/volumes/kubernetes.io~projected/kube-api-access-lxk9v DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-579 DeviceMajor:0 DeviceMinor:579 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-465 DeviceMajor:0 DeviceMinor:465 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1053 DeviceMajor:0 DeviceMinor:1053 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1209 DeviceMajor:0 DeviceMinor:1209 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/902909ca-ab08-49aa-9736-70e073f8e67d/volumes/kubernetes.io~projected/kube-api-access-kskqr DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-374 DeviceMajor:0 DeviceMinor:374 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/1cceb4712c77ca2fdf0849f1bea9fd2ebeb3d8a95d1db4ec067d2a7d333a8d1f/userdata/shm DeviceMajor:0 DeviceMinor:653 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1011 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b/userdata/shm DeviceMajor:0 DeviceMinor:787 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-712 DeviceMajor:0 DeviceMinor:712 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1018 DeviceMajor:0 DeviceMinor:1018 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1029 DeviceMajor:0 DeviceMinor:1029 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/767da57e-44e4-4861-bc6f-427c5bbb4d9d/volumes/kubernetes.io~projected/kube-api-access-2nxzr DeviceMajor:0 DeviceMinor:118 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/290d1f84-5c5c-4bff-b045-e6020793cded/volumes/kubernetes.io~projected/kube-api-access-rdkx7 DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ac6d8eb6-1d5e-4757-9823-5ffe478c711c/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:456 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/bf1cc230-0a79-4a1d-b500-a65d02e50973/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:648 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59bf5114-29f9-4f70-8582-108e95327cb2/volumes/kubernetes.io~projected/kube-api-access-z5xgh DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:435 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-626 DeviceMajor:0 DeviceMinor:626 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c/userdata/shm DeviceMajor:0 DeviceMinor:398 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2a25632e-32d0-43d2-9be7-f515d29a1720/volumes/kubernetes.io~projected/kube-api-access-bcfsk DeviceMajor:0 DeviceMinor:1102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf4c5410-fb44-45e8-ab66-24806e6349b8/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:496 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~projected/kube-api-access-mk4ql DeviceMajor:0 DeviceMinor:925 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10/volumes/kubernetes.io~projected/kube-api-access-xsvmx DeviceMajor:0 DeviceMinor:105 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-581 DeviceMajor:0 DeviceMinor:581 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:600 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a2bdf5b0-8764-4b15-97c9-20af36634fd0/volumes/kubernetes.io~projected/kube-api-access-sfb5c DeviceMajor:0 DeviceMinor:436 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:469 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:868 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-973 DeviceMajor:0 DeviceMinor:973 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3/userdata/shm DeviceMajor:0 DeviceMinor:1103 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d/userdata/shm DeviceMajor:0 DeviceMinor:654 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99a3ea12b4f55e1c479ad9ada5ad2452af1ac0e39904d45fd6656f0a1828ea6f/userdata/shm DeviceMajor:0 DeviceMinor:873 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1/userdata/shm DeviceMajor:0 DeviceMinor:1014 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1165 DeviceMajor:0 DeviceMinor:1165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f/userdata/shm DeviceMajor:0 DeviceMinor:485 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1085 DeviceMajor:0 DeviceMinor:1085 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/68104a8c-3fac-4d4b-b975-bc2d045b3375/volumes/kubernetes.io~projected/kube-api-access-sx8j5 
DeviceMajor:0 DeviceMinor:957 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~projected/kube-api-access-kkw55 DeviceMajor:0 DeviceMinor:108 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:439 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~projected/kube-api-access-75jwh DeviceMajor:0 DeviceMinor:806 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e/userdata/shm DeviceMajor:0 DeviceMinor:843 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00375107-9a3b-4161-a90d-72ea8827c5fc/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:822 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db/userdata/shm DeviceMajor:0 DeviceMinor:1115 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~projected/kube-api-access-vkxxg DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:644 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f38b464d-a218-4753-b7ac-a7d373952c4d/volumes/kubernetes.io~projected/kube-api-access-lfbx8 DeviceMajor:0 DeviceMinor:947 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e54baea8-6c3e-45a0-ac8c-880a8aaa8208/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:153 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1189 DeviceMajor:0 DeviceMinor:1189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d42bcf13-548b-46c4-9a3d-a46f1b6ec045/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:542 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/029b127e-0faf-4957-b591-9c561b053cda/volumes/kubernetes.io~projected/kube-api-access-wgt55 DeviceMajor:0 DeviceMinor:601 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9/userdata/shm DeviceMajor:0 DeviceMinor:657 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1176 DeviceMajor:0 DeviceMinor:1176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2/userdata/shm DeviceMajor:0 DeviceMinor:440 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb59122d7a7b042121b64340b8ada26c1823fa00f9c980926b47cbaa0d20cc3f/userdata/shm DeviceMajor:0 DeviceMinor:937 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52/userdata/shm DeviceMajor:0 DeviceMinor:917 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~projected/kube-api-access-dvq2h DeviceMajor:0 DeviceMinor:1173 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:645 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/354c2a6b66c065fe648ce36ee5e4c7bbfed1c688af2120800fda750d61548f3b/userdata/shm DeviceMajor:0 
DeviceMinor:965 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-159 DeviceMajor:0 DeviceMinor:159 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1045 DeviceMajor:0 DeviceMinor:1045 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-979 DeviceMajor:0 DeviceMinor:979 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1001 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:955 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7fb5bad7-07d9-45ac-ad27-a887d12d148f/volumes/kubernetes.io~projected/kube-api-access-sdkqm DeviceMajor:0 DeviceMinor:543 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:694 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce3c462e-b655-40bc-811a-95ccde49fdb8/volumes/kubernetes.io~projected/kube-api-access-8jxdg DeviceMajor:0 DeviceMinor:699 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9548e397-0db4-41c8-9cc8-b575060e9c66/volumes/kubernetes.io~projected/kube-api-access-kbwfq DeviceMajor:0 DeviceMinor:913 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b8e76ab6e36792c638116c40619921d7addf605312998f00e62d98e5a5614955/userdata/shm DeviceMajor:0 DeviceMinor:936 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ed3daf11e343e1b2061522afa05ec8c54dad41a761078c089559715ea58a7e8b/userdata/shm DeviceMajor:0 DeviceMinor:941 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-975 DeviceMajor:0 DeviceMinor:975 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ab2f96fb-ef55-4427-a598-7e3f1e224045/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-1047 DeviceMajor:0 DeviceMinor:1047 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-685 DeviceMajor:0 DeviceMinor:685 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0c2c4a58-9780-4ecd-b417-e590ac3576ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1071 DeviceMajor:0 DeviceMinor:1071 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/80994f33-21e7-45d6-9f21-1cfd8e1f41ce/volumes/kubernetes.io~projected/kube-api-access-gwqln DeviceMajor:0 DeviceMinor:966 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/053cc9bc-f98e-46f6-93bb-b5344d20bf74/volumes/kubernetes.io~projected/kube-api-access-gnxv5 DeviceMajor:0 DeviceMinor:269 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1/userdata/shm DeviceMajor:0 DeviceMinor:405 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16f8e725-f18a-478e-88c5-87d54aeb4857/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:522 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec/userdata/shm 
DeviceMajor:0 DeviceMinor:370 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20/userdata/shm DeviceMajor:0 DeviceMinor:649 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-717 DeviceMajor:0 DeviceMinor:717 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~projected/kube-api-access-g2w6b DeviceMajor:0 DeviceMinor:1013 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1020 DeviceMajor:0 DeviceMinor:1020 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/volumes/kubernetes.io~projected/kube-api-access-6fw5f DeviceMajor:0 DeviceMinor:783 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491/userdata/shm DeviceMajor:0 DeviceMinor:571 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:870 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3/userdata/shm DeviceMajor:0 DeviceMinor:933 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3be6654-f969-4952-976d-218c86af7d2d/volumes/kubernetes.io~projected/kube-api-access-9wnqw DeviceMajor:0 DeviceMinor:836 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3ee0f85b-219b-47cb-a22a-67d359a69881/volumes/kubernetes.io~projected/kube-api-access-82f9g DeviceMajor:0 DeviceMinor:869 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d325c523-8e6f-4665-9f54-334eaf301141/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:927 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf9d21f9-64d6-4e21-a985-491197038568/volumes/kubernetes.io~projected/kube-api-access-qgffb DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0278b04b-b27b-4717-a009-a70315fd05a6/volumes/kubernetes.io~projected/kube-api-access-2snjj DeviceMajor:0 DeviceMinor:350 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1012 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3ff09ab-cbe1-49e7-8121-5f71997a5176/volumes/kubernetes.io~projected/kube-api-access-n2hxh DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1183 DeviceMajor:0 DeviceMinor:1183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a9de7243-90c0-49c4-8059-34e0558fca40/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7/userdata/shm DeviceMajor:0 DeviceMinor:78 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd/userdata/shm DeviceMajor:0 DeviceMinor:607 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb3a395c88586f9726036952a749f0819efe1ca07bfec591e8bf77ac60734a87/userdata/shm DeviceMajor:0 DeviceMinor:655 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1031 DeviceMajor:0 DeviceMinor:1031 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-249 DeviceMajor:0 DeviceMinor:249 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-809 DeviceMajor:0 DeviceMinor:809 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~projected/kube-api-access-wmlh2 DeviceMajor:0 DeviceMinor:923 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb74a42e367af8586d98d799b6ded81e9d93e7b3d806a9a925a94b3e763a3830/userdata/shm DeviceMajor:0 DeviceMinor:461 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/daaff2e16f5e705f64dc5a7b025fa31e1b94f1cba87483d97066f316342671c2/userdata/shm DeviceMajor:0 DeviceMinor:934 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/830ff1d6-332e-46b1-b13c-c2507fdc3c19/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1169 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:871 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bd8aa7c1-0a04-4df0-9047-63ab846b9535/volumes/kubernetes.io~projected/kube-api-access-w477x DeviceMajor:0 DeviceMinor:872 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248/userdata/shm DeviceMajor:0 DeviceMinor:650 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-423 DeviceMajor:0 DeviceMinor:423 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b89fb313-d01a-4305-b123-e253b3382b85/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:402 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ffe2e75-9cc3-4244-95c8-800463c5aa28/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:842 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b75d4622-ac12-4f82-afc9-ab63e6278b0c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9/volumes/kubernetes.io~projected/kube-api-access-qvdg2 DeviceMajor:0 DeviceMinor:260 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-757 DeviceMajor:0 DeviceMinor:757 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-948 DeviceMajor:0 DeviceMinor:948 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-488 DeviceMajor:0 DeviceMinor:488 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c0a9d3ecc02d97801da90faa78ea9a04fc4381142a502c2ebc0a26f2eb9f11b/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6a93ff56-362e-44fc-a54f-666a01559892/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:922 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:03b02d62589926a MacAddress:c2:e5:90:0f:78:2a Speed:10000 Mtu:8900} {Name:06b37ad3c0f2f56 MacAddress:8a:a6:3d:b6:3b:ab Speed:10000 Mtu:8900} {Name:087f1bfbdd93b7c MacAddress:22:b3:03:f4:7f:94 Speed:10000 Mtu:8900} {Name:0bec5b0b6152a0f MacAddress:02:ca:c7:72:1a:ae Speed:10000 Mtu:8900} {Name:15ab52d652113ef MacAddress:6e:4a:ee:37:31:71 Speed:10000 Mtu:8900} {Name:1be597ce241a4c6 MacAddress:fa:ea:20:6a:cb:61 Speed:10000 Mtu:8900} {Name:1bf3d426d907a1c MacAddress:36:49:d5:ef:79:eb Speed:10000 Mtu:8900} {Name:1cceb4712c77ca2 MacAddress:96:ad:13:2c:67:1a Speed:10000 Mtu:8900} {Name:217f2ddac846068 MacAddress:b2:eb:79:7f:9a:47 Speed:10000 Mtu:8900} {Name:29147c6f3d86254 
MacAddress:be:ec:42:2c:47:a7 Speed:10000 Mtu:8900} {Name:35c33231cc5394e MacAddress:e2:bd:e0:6a:84:e1 Speed:10000 Mtu:8900} {Name:381d29d4a5ad407 MacAddress:4e:b7:cf:0a:67:c2 Speed:10000 Mtu:8900} {Name:3e755bfdf969ae0 MacAddress:2e:43:94:a2:74:af Speed:10000 Mtu:8900} {Name:3f26792b1730130 MacAddress:32:5a:8e:b6:66:39 Speed:10000 Mtu:8900} {Name:3f4c5edfdc04ff6 MacAddress:76:73:f6:fa:38:ab Speed:10000 Mtu:8900} {Name:4a97b24b2b4402b MacAddress:0e:4e:87:2d:fc:ea Speed:10000 Mtu:8900} {Name:54bd19e9b4d7f9a MacAddress:72:37:0d:ec:99:71 Speed:10000 Mtu:8900} {Name:5e20d46e2ff68c3 MacAddress:ae:59:0b:e6:71:ac Speed:10000 Mtu:8900} {Name:5ea36913089cb55 MacAddress:56:aa:31:fa:30:7d Speed:10000 Mtu:8900} {Name:5ebf31a11d3c2bc MacAddress:3a:ce:f4:bc:1f:d6 Speed:10000 Mtu:8900} {Name:5eea39afe08c6fd MacAddress:c2:98:b2:54:04:18 Speed:10000 Mtu:8900} {Name:5f5a7d7c0e9750e MacAddress:52:1e:80:fb:a8:d5 Speed:10000 Mtu:8900} {Name:637824f5bb31724 MacAddress:66:fa:80:0e:e8:5d Speed:10000 Mtu:8900} {Name:66854fab27d0486 MacAddress:f6:67:a2:ed:5f:ba Speed:10000 Mtu:8900} {Name:6aa30a9c358b647 MacAddress:3a:a4:50:40:e9:78 Speed:10000 Mtu:8900} {Name:6d8e82ffbe07582 MacAddress:3e:81:d3:52:77:01 Speed:10000 Mtu:8900} {Name:70ca4cb931b7545 MacAddress:c6:58:3d:7a:8e:07 Speed:10000 Mtu:8900} {Name:71a8c5f3dcdb995 MacAddress:1e:78:7e:5a:b3:f5 Speed:10000 Mtu:8900} {Name:79c45dcce1d819c MacAddress:9a:4f:fa:54:98:d8 Speed:10000 Mtu:8900} {Name:7c0a9d3ecc02d97 MacAddress:f6:72:ef:d0:c2:42 Speed:10000 Mtu:8900} {Name:85d2f2197e1e2ff MacAddress:1a:71:6d:49:03:3f Speed:10000 Mtu:8900} {Name:86270375ddd9ef7 MacAddress:e6:b3:fc:ac:b2:98 Speed:10000 Mtu:8900} {Name:8a23814e5648f40 MacAddress:2a:d4:ce:72:c0:5b Speed:10000 Mtu:8900} {Name:8bdb6f1dfbc7856 MacAddress:36:45:b0:50:fe:96 Speed:10000 Mtu:8900} {Name:8f5a82461be0913 MacAddress:ce:3c:13:6f:9b:45 Speed:10000 Mtu:8900} {Name:99a3ea12b4f55e1 MacAddress:56:c0:cb:5f:79:71 Speed:10000 Mtu:8900} {Name:9f1629a9c890b15 MacAddress:56:cd:88:1c:8e:b1 
Speed:10000 Mtu:8900} {Name:aadd21574589df0 MacAddress:02:03:3e:e5:ce:44 Speed:10000 Mtu:8900} {Name:b4b9a672b76f3ad MacAddress:16:e3:44:8b:6c:7e Speed:10000 Mtu:8900} {Name:b8e76ab6e36792c MacAddress:36:7c:53:2a:92:c2 Speed:10000 Mtu:8900} {Name:bb505f490d9f0d1 MacAddress:ea:96:ce:c2:ab:d5 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:46:57:c0:4d:d0:6e Speed:0 Mtu:8900} {Name:c338b30f4ce7d3b MacAddress:76:29:19:7b:af:4b Speed:10000 Mtu:8900} {Name:caf8685ec1d7171 MacAddress:9e:da:6c:97:7e:c6 Speed:10000 Mtu:8900} {Name:cb3a395c88586f9 MacAddress:5e:54:70:6e:91:1b Speed:10000 Mtu:8900} {Name:cb59122d7a7b042 MacAddress:06:92:65:07:a7:f3 Speed:10000 Mtu:8900} {Name:cb74a42e367af85 MacAddress:32:61:26:f5:81:a5 Speed:10000 Mtu:8900} {Name:d01d4e5c147c00b MacAddress:92:a2:a4:f2:f6:c8 Speed:10000 Mtu:8900} {Name:d5c45f47f10bb08 MacAddress:4e:a9:34:d7:5f:af Speed:10000 Mtu:8900} {Name:daaff2e16f5e705 MacAddress:f2:82:c2:e3:74:11 Speed:10000 Mtu:8900} {Name:e311ec640a1a240 MacAddress:aa:d4:a2:0a:33:aa Speed:10000 Mtu:8900} {Name:e3813939efa5069 MacAddress:3e:03:ea:d6:47:b0 Speed:10000 Mtu:8900} {Name:e48d984bde067ff MacAddress:06:53:59:07:13:24 Speed:10000 Mtu:8900} {Name:ed3daf11e343e1b MacAddress:8a:39:72:c7:6c:98 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:c1:64:46 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ac:24:0f Speed:-1 Mtu:9000} {Name:f060cf0da8cda14 MacAddress:72:cf:24:c3:76:32 Speed:10000 Mtu:8900} {Name:f8ec4afc7356301 MacAddress:06:32:e0:8a:5c:72 Speed:10000 Mtu:8900} {Name:f9589a25d07ced5 MacAddress:86:58:49:71:d6:9e Speed:10000 Mtu:8900} {Name:fd58bf4306c0d34 MacAddress:b6:69:54:c2:0d:7f Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:52:62:dd:3e:30:92 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 
NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction 
Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 13:23:56.203568 master-0 kubenswrapper[27835]: I0318 13:23:56.203085 27835 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 13:23:56.203568 master-0 kubenswrapper[27835]: I0318 13:23:56.203234 27835 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 13:23:56.203855 master-0 kubenswrapper[27835]: I0318 13:23:56.203654 27835 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 13:23:56.203884 master-0 kubenswrapper[27835]: I0318 13:23:56.203851 27835 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 13:23:56.204109 master-0 kubenswrapper[27835]: I0318 13:23:56.203886 27835 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 13:23:56.204158 master-0 kubenswrapper[27835]: I0318 13:23:56.204132 27835 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 13:23:56.204158 master-0 kubenswrapper[27835]: I0318 13:23:56.204143 27835 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 13:23:56.204158 master-0 kubenswrapper[27835]: I0318 13:23:56.204154 27835 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:23:56.204237 master-0 kubenswrapper[27835]: I0318 13:23:56.204183 27835 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 13:23:56.204237 master-0 kubenswrapper[27835]: I0318 13:23:56.204229 27835 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:23:56.204339 master-0 kubenswrapper[27835]: I0318 13:23:56.204318 27835 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 13:23:56.204400 master-0 kubenswrapper[27835]: I0318 13:23:56.204388 27835 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 13:23:56.204461 master-0 kubenswrapper[27835]: I0318 13:23:56.204406 27835 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 13:23:56.204461 master-0 kubenswrapper[27835]: I0318 13:23:56.204449 27835 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 13:23:56.204545 master-0 kubenswrapper[27835]: I0318 13:23:56.204467 27835 kubelet.go:324] "Adding apiserver pod source"
Mar 18 13:23:56.204545 master-0 kubenswrapper[27835]: I0318 13:23:56.204490 27835 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 13:23:56.205816 master-0 kubenswrapper[27835]: I0318 13:23:56.205679 27835 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 13:23:56.205874 master-0 kubenswrapper[27835]: I0318 13:23:56.205860 27835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206217 27835 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206369 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206394 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206406 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206444 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206455 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206468 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206480 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206491 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206505 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206517 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206552 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206573 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.206615 27835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.207075 27835 server.go:1280] "Started kubelet"
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.207245 27835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 13:23:56.207579 master-0 kubenswrapper[27835]: I0318 13:23:56.207345 27835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 13:23:56.212856 master-0 kubenswrapper[27835]: I0318 13:23:56.209585 27835 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 13:23:56.212856 master-0 kubenswrapper[27835]: I0318 13:23:56.209826 27835 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 13:23:56.212856 master-0 kubenswrapper[27835]: I0318 13:23:56.210382 27835 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 13:23:56.208237 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 13:23:56.220577 master-0 kubenswrapper[27835]: I0318 13:23:56.220285 27835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 13:23:56.223280 master-0 kubenswrapper[27835]: I0318 13:23:56.221767 27835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 13:23:56.229550 master-0 kubenswrapper[27835]: E0318 13:23:56.229486 27835 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 18 13:23:56.233379 master-0 kubenswrapper[27835]: I0318 13:23:56.233332 27835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 13:23:56.233569 master-0 kubenswrapper[27835]: I0318 13:23:56.233436 27835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 13:23:56.233569 master-0 kubenswrapper[27835]: I0318 13:23:56.233527 27835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 12:57:20 +0000 UTC, rotation deadline is 2026-03-19 09:06:42.780944325 +0000 UTC
Mar 18 13:23:56.233569 master-0 kubenswrapper[27835]: I0318 13:23:56.233566 27835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h42m46.547380862s for next certificate rotation
Mar 18 13:23:56.234797 master-0 kubenswrapper[27835]: I0318 13:23:56.233919 27835 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 13:23:56.234797 master-0 kubenswrapper[27835]: I0318 13:23:56.233946 27835 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 13:23:56.234797 master-0 kubenswrapper[27835]: I0318 13:23:56.234084 27835 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236779 27835 factory.go:153] Registering CRI-O factory
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236808 27835 factory.go:221] Registration of the crio container factory successfully
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236893 27835 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236902 27835 factory.go:55] Registering systemd factory
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236911 27835 factory.go:221] Registration of the systemd container factory successfully
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236935 27835 factory.go:103] Registering Raw factory
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.236951 27835 manager.go:1196] Started watching for new ooms in manager
Mar 18 13:23:56.239744 master-0 kubenswrapper[27835]: I0318 13:23:56.237431 27835 manager.go:319] Starting recovery of all containers
Mar 18 13:23:56.240145 master-0 kubenswrapper[27835]: I0318 13:23:56.239807 27835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 13:23:56.252292 master-0 kubenswrapper[27835]: I0318 13:23:56.252215 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.252292 master-0 kubenswrapper[27835]: I0318 13:23:56.252278 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm" seLinuxMountContext=""
Mar 18 13:23:56.252292 master-0 kubenswrapper[27835]: I0318 13:23:56.252291 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252303 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9de7243-90c0-49c4-8059-34e0558fca40" volumeName="kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252313 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b89fb313-d01a-4305-b123-e253b3382b85" volumeName="kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252325 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252337 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b12af9a-8041-477f-90eb-05bb6ae7861a" volumeName="kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252348 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252362 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0278b04b-b27b-4717-a009-a70315fd05a6" volumeName="kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252375 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702076a9-b542-4768-9e9e-99b2cac0a66e" volumeName="kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252385 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702076a9-b542-4768-9e9e-99b2cac0a66e" volumeName="kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252396 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" volumeName="kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252406 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf4c5410-fb44-45e8-ab66-24806e6349b8" volumeName="kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252435 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252449 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1cc230-0a79-4a1d-b500-a65d02e50973" volumeName="kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252460 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252471 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf4c5410-fb44-45e8-ab66-24806e6349b8" volumeName="kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252501 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252515 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="830ff1d6-332e-46b1-b13c-c2507fdc3c19" volumeName="kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252527 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252554 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252569 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d325c523-8e6f-4665-9f54-334eaf301141" volumeName="kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252581 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca" seLinuxMountContext=""
Mar 18 13:23:56.252583 master-0 kubenswrapper[27835]: I0318 13:23:56.252594 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea9eb53-0385-4a1a-a64f-696f8520cf49" volumeName="kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252605 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8aa7c1-0a04-4df0-9047-63ab846b9535" volumeName="kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252617 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" volumeName="kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252633 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f38b464d-a218-4753-b7ac-a7d373952c4d" volumeName="kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252647 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252660 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ffe2e75-9cc3-4244-95c8-800463c5aa28" volumeName="kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252674 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe643e40-d06d-4e69-9be3-0065c2a78567" volumeName="kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252686 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702076a9-b542-4768-9e9e-99b2cac0a66e" volumeName="kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252716 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a350f317-f058-4102-af5c-cbba46d35e02" volumeName="kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252730 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b89fb313-d01a-4305-b123-e253b3382b85" volumeName="kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252743 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8aa7c1-0a04-4df0-9047-63ab846b9535" volumeName="kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252756 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252768 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252781 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252794 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252807 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b89fb313-d01a-4305-b123-e253b3382b85" volumeName="kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252821 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252832 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252845 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252859 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2455453-5943-49ef-bfea-cba077197da0" volumeName="kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252872 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e390416b-4fa1-41d5-bc74-9e779b252350" volumeName="kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252887 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="053cc9bc-f98e-46f6-93bb-b5344d20bf74" volumeName="kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252900 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b12af9a-8041-477f-90eb-05bb6ae7861a" volumeName="kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252918 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252931 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252944 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252955 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="053cc9bc-f98e-46f6-93bb-b5344d20bf74" volumeName="kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252967 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252978 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a350f317-f058-4102-af5c-cbba46d35e02" volumeName="kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.252994 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253008 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253022 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253053 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253065 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e390416b-4fa1-41d5-bc74-9e779b252350" volumeName="kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253077 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e390416b-4fa1-41d5-bc74-9e779b252350" volumeName="kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253090 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f38b464d-a218-4753-b7ac-a7d373952c4d" volumeName="kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253102 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68104a8c-3fac-4d4b-b975-bc2d045b3375" volumeName="kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253114 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68104a8c-3fac-4d4b-b975-bc2d045b3375" volumeName="kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253125 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74f296d4-40d1-449e-88ea-db6c1574a11a" volumeName="kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253138 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c0e5eca-819b-40f3-bf77-0cd90a4f6e94" volumeName="kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253153 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253166 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f4ae93-428b-4ebd-bfaa-18359b407ede" volumeName="kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253179 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a25632e-32d0-43d2-9be7-f515d29a1720" volumeName="kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253191 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253202 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702076a9-b542-4768-9e9e-99b2cac0a66e" volumeName="kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253216 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a350f317-f058-4102-af5c-cbba46d35e02" volumeName="kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253230 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d325c523-8e6f-4665-9f54-334eaf301141" volumeName="kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253244 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253258 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16f8e725-f18a-478e-88c5-87d54aeb4857" volumeName="kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253271 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253284 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" volumeName="kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253295 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d325c523-8e6f-4665-9f54-334eaf301141" volumeName="kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253306 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253320 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253331 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="029b127e-0faf-4957-b591-9c561b053cda" volumeName="kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253343 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ee0f85b-219b-47cb-a22a-67d359a69881" volumeName="kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g" seLinuxMountContext=""
Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318
13:23:56.253359 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253370 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" volumeName="kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253383 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" volumeName="kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253396 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253435 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00375107-9a3b-4161-a90d-72ea8827c5fc" volumeName="kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: I0318 13:23:56.253450 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config" seLinuxMountContext="" Mar 18 13:23:56.253401 master-0 kubenswrapper[27835]: 
I0318 13:23:56.253463 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a25632e-32d0-43d2-9be7-f515d29a1720" volumeName="kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253474 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68104a8c-3fac-4d4b-b975-bc2d045b3375" volumeName="kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253486 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253500 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9" volumeName="kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253512 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" volumeName="kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253524 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="029b127e-0faf-4957-b591-9c561b053cda" volumeName="kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253539 27835 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a6090f0-3a27-4102-b8dd-b071644a3543" volumeName="kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253553 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253565 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253579 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253595 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253608 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253620 
27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9548e397-0db4-41c8-9cc8-b575060e9c66" volumeName="kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253632 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253665 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9de7243-90c0-49c4-8059-34e0558fca40" volumeName="kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253677 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b75d4622-ac12-4f82-afc9-ab63e6278b0c" volumeName="kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253689 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="deb67ea0-8342-40cb-b0f4-115270e878dd" volumeName="kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253702 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 
kubenswrapper[27835]: I0318 13:23:56.253714 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16f8e725-f18a-478e-88c5-87d54aeb4857" volumeName="kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253735 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253749 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00375107-9a3b-4161-a90d-72ea8827c5fc" volumeName="kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253764 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253778 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253794 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 
kubenswrapper[27835]: I0318 13:23:56.253809 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253821 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253834 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8eff549-02f3-446e-b3a1-a66cecdc02a6" volumeName="kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253847 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253858 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd8aa7c1-0a04-4df0-9047-63ab846b9535" volumeName="kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253890 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: 
I0318 13:23:56.253905 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="720a1f60-c1cb-4aef-aaec-f082090ca631" volumeName="kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253919 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253931 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253944 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253956 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ee0f85b-219b-47cb-a22a-67d359a69881" volumeName="kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253968 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74f296d4-40d1-449e-88ea-db6c1574a11a" volumeName="kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 
13:23:56.253980 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb65c095-ca20-432c-a069-ad6719fca9c8" volumeName="kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.253999 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a6090f0-3a27-4102-b8dd-b071644a3543" volumeName="kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254014 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a97fe2-5022-4997-9936-4247ae7ecb43" volumeName="kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254028 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a25632e-32d0-43d2-9be7-f515d29a1720" volumeName="kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254041 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdcd72a6-a8e8-47ba-8b51-7325d35bad6b" volumeName="kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254061 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f38b464d-a218-4753-b7ac-a7d373952c4d" volumeName="kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls" seLinuxMountContext="" Mar 18 
13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254076 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254096 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" volumeName="kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254110 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254126 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254147 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb65c095-ca20-432c-a069-ad6719fca9c8" volumeName="kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254161 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" volumeName="kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9" 
seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254175 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254195 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe643e40-d06d-4e69-9be3-0065c2a78567" volumeName="kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254210 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00375107-9a3b-4161-a90d-72ea8827c5fc" volumeName="kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254268 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a6090f0-3a27-4102-b8dd-b071644a3543" volumeName="kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254291 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254305 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41cc6278-8f99-407c-ba5f-750a40e3058c" 
volumeName="kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254345 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254370 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d325c523-8e6f-4665-9f54-334eaf301141" volumeName="kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254384 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254489 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9" volumeName="kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254504 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf1cc230-0a79-4a1d-b500-a65d02e50973" volumeName="kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254516 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254529 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb65c095-ca20-432c-a069-ad6719fca9c8" volumeName="kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254556 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2316774-4ebc-4fa9-be07-eb1f16f614dd" volumeName="kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254572 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="029b127e-0faf-4957-b591-9c561b053cda" volumeName="kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254586 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254599 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6db2bfbd-d8db-4384-8979-23e8a1e87e5e" volumeName="kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254611 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" volumeName="kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254624 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a350f317-f058-4102-af5c-cbba46d35e02" volumeName="kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254636 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254651 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2bdf5b0-8764-4b15-97c9-20af36634fd0" volumeName="kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254665 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254678 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3be6654-f969-4952-976d-218c86af7d2d" volumeName="kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254694 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254716 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00375107-9a3b-4161-a90d-72ea8827c5fc" volumeName="kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254734 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a6090f0-3a27-4102-b8dd-b071644a3543" volumeName="kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254747 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59bf5114-29f9-4f70-8582-108e95327cb2" volumeName="kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254766 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b2acd84-85c0-4c47-90a4-44745b79976d" volumeName="kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254780 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="68104a8c-3fac-4d4b-b975-bc2d045b3375" volumeName="kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254792 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254808 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9548e397-0db4-41c8-9cc8-b575060e9c66" volumeName="kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254825 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254846 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254864 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2316774-4ebc-4fa9-be07-eb1f16f614dd" volumeName="kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254878 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19a76585-a9ac-4ed9-9146-bb77b31848c6" volumeName="kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254890 27835 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="2b12af9a-8041-477f-90eb-05bb6ae7861a" volumeName="kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254904 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ee0f85b-219b-47cb-a22a-67d359a69881" volumeName="kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254929 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="708812af-3249-4d57-8f28-055da22a7329" volumeName="kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254941 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="830ff1d6-332e-46b1-b13c-c2507fdc3c19" volumeName="kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.254956 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255020 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255087 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="702076a9-b542-4768-9e9e-99b2cac0a66e" volumeName="kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255101 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" volumeName="kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255116 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3c462e-b655-40bc-811a-95ccde49fdb8" volumeName="kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255197 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15a97fe2-5022-4997-9936-4247ae7ecb43" volumeName="kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255213 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2316774-4ebc-4fa9-be07-eb1f16f614dd" volumeName="kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255260 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe643e40-d06d-4e69-9be3-0065c2a78567" volumeName="kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics" seLinuxMountContext="" Mar 18 13:23:56.256021 master-0 kubenswrapper[27835]: I0318 13:23:56.255274 27835 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256539 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ffe2e75-9cc3-4244-95c8-800463c5aa28" volumeName="kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256604 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf4c5410-fb44-45e8-ab66-24806e6349b8" volumeName="kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256705 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea9eb53-0385-4a1a-a64f-696f8520cf49" volumeName="kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256764 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256784 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="13c71f7d-1485-4f86-beb2-ee16cf420350" volumeName="kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256800 27835 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="708812af-3249-4d57-8f28-055da22a7329" volumeName="kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256813 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256829 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256844 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256857 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256907 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bdcd72a6-a8e8-47ba-8b51-7325d35bad6b" volumeName="kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256925 27835 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256940 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256957 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2455453-5943-49ef-bfea-cba077197da0" volumeName="kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256969 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fb5bad7-07d9-45ac-ad27-a887d12d148f" volumeName="kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256982 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9de7243-90c0-49c4-8059-34e0558fca40" volumeName="kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.256995 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab2f96fb-ef55-4427-a598-7e3f1e224045" volumeName="kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257018 27835 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16f8e725-f18a-478e-88c5-87d54aeb4857" volumeName="kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257032 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16f8e725-f18a-478e-88c5-87d54aeb4857" volumeName="kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257052 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ee0f85b-219b-47cb-a22a-67d359a69881" volumeName="kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257066 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59bf5114-29f9-4f70-8582-108e95327cb2" volumeName="kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257078 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257091 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" volumeName="kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 
13:23:56.257106 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e54baea8-6c3e-45a0-ac8c-880a8aaa8208" volumeName="kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257126 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4202c2-c330-4a5d-87e7-0a63d069113f" volumeName="kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257139 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="708812af-3249-4d57-8f28-055da22a7329" volumeName="kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257157 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" volumeName="kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257212 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" volumeName="kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257231 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 
kubenswrapper[27835]: I0318 13:23:56.257246 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" volumeName="kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257261 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf9d21f9-64d6-4e21-a985-491197038568" volumeName="kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257277 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3728ab-5d50-40ac-95b3-74a5b62a557f" volumeName="kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257292 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3c462e-b655-40bc-811a-95ccde49fdb8" volumeName="kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257307 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c0e5eca-819b-40f3-bf77-0cd90a4f6e94" volumeName="kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257322 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9d09a56-ed4c-40b7-8be1-f3934c07296e" volumeName="kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: 
I0318 13:23:56.257335 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f38b464d-a218-4753-b7ac-a7d373952c4d" volumeName="kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257358 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00375107-9a3b-4161-a90d-72ea8827c5fc" volumeName="kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257379 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a93ff56-362e-44fc-a54f-666a01559892" volumeName="kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257395 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767da57e-44e4-4861-bc6f-427c5bbb4d9d" volumeName="kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257427 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb65c095-ca20-432c-a069-ad6719fca9c8" volumeName="kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257442 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0a6090f0-3a27-4102-b8dd-b071644a3543" volumeName="kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: 
I0318 13:23:56.257456 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34a3a84b-048f-4822-9f05-0e7509327ca2" volumeName="kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257469 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="720a1f60-c1cb-4aef-aaec-f082090ca631" volumeName="kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257484 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3ff09ab-cbe1-49e7-8121-5f71997a5176" volumeName="kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257498 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7f4ae93-428b-4ebd-bfaa-18359b407ede" volumeName="kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257511 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e54baea8-6c3e-45a0-ac8c-880a8aaa8208" volumeName="kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257522 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="595f697b-d238-4500-84ce-1ea00377f05e" volumeName="kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 
kubenswrapper[27835]: I0318 13:23:56.257542 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c0e5eca-819b-40f3-bf77-0cd90a4f6e94" volumeName="kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257555 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="902909ca-ab08-49aa-9736-70e073f8e67d" volumeName="kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257574 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9548e397-0db4-41c8-9cc8-b575060e9c66" volumeName="kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257595 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" volumeName="kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257610 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ffe2e75-9cc3-4244-95c8-800463c5aa28" volumeName="kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257632 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" volumeName="kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd" seLinuxMountContext="" Mar 18 13:23:56.259372 
master-0 kubenswrapper[27835]: I0318 13:23:56.257646 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e7156cf-2d68-4de8-b7e7-60e1539590dd" volumeName="kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257665 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="394061b4-1bac-4699-96d2-88558c1adaf8" volumeName="kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257680 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" volumeName="kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257701 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0c2c4a58-9780-4ecd-b417-e590ac3576ed" volumeName="kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257715 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="290d1f84-5c5c-4bff-b045-e6020793cded" volumeName="kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257727 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10" volumeName="kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config" seLinuxMountContext="" Mar 18 13:23:56.259372 
master-0 kubenswrapper[27835]: I0318 13:23:56.257739 27835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce3c462e-b655-40bc-811a-95ccde49fdb8" volumeName="kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg" seLinuxMountContext="" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257752 27835 reconstruct.go:97] "Volume reconstruction finished" Mar 18 13:23:56.259372 master-0 kubenswrapper[27835]: I0318 13:23:56.257760 27835 reconciler.go:26] "Reconciler: start to sync state" Mar 18 13:23:56.261942 master-0 kubenswrapper[27835]: I0318 13:23:56.261465 27835 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 13:23:56.276075 master-0 kubenswrapper[27835]: I0318 13:23:56.275939 27835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 13:23:56.279616 master-0 kubenswrapper[27835]: I0318 13:23:56.279556 27835 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 18 13:23:56.279729 master-0 kubenswrapper[27835]: I0318 13:23:56.279652 27835 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 13:23:56.279729 master-0 kubenswrapper[27835]: I0318 13:23:56.279694 27835 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 13:23:56.279866 master-0 kubenswrapper[27835]: E0318 13:23:56.279777 27835 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 13:23:56.282301 master-0 kubenswrapper[27835]: I0318 13:23:56.282245 27835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:23:56.289644 master-0 kubenswrapper[27835]: I0318 13:23:56.289440 27835 generic.go:334] "Generic (PLEG): container finished" podID="b89fb313-d01a-4305-b123-e253b3382b85" containerID="9a89fb2a5bf4388a7514a371a51f6ac933c33ac9c54d8113cf8c422503facd37" exitCode=0 Mar 18 13:23:56.296384 master-0 kubenswrapper[27835]: I0318 13:23:56.296322 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" exitCode=0 Mar 18 13:23:56.299567 master-0 kubenswrapper[27835]: I0318 13:23:56.299524 27835 generic.go:334] "Generic (PLEG): container finished" podID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerID="ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e" exitCode=0 Mar 18 13:23:56.299567 master-0 kubenswrapper[27835]: I0318 13:23:56.299557 27835 generic.go:334] "Generic (PLEG): container finished" podID="ce3728ab-5d50-40ac-95b3-74a5b62a557f" containerID="2be57f1bc2d84ad4ff4dd4172fd46f3cfddc882962f936029d991fec6bacfeb8" exitCode=0 Mar 18 13:23:56.304279 master-0 kubenswrapper[27835]: I0318 13:23:56.304234 27835 generic.go:334] "Generic (PLEG): container finished" podID="07505113-d5e7-4ea3-b9cc-8f08cba45ccc" 
containerID="34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1" exitCode=0 Mar 18 13:23:56.309753 master-0 kubenswrapper[27835]: I0318 13:23:56.309700 27835 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="aec5346f46da33d997a4c62bc92998fc48a19573760229d71e97091c1c9a67c9" exitCode=0 Mar 18 13:23:56.309753 master-0 kubenswrapper[27835]: I0318 13:23:56.309739 27835 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="ac6a03840b83398cf49ffdbda9e45e37a6a4ad486796c7aa5525dfdd483b2a1c" exitCode=0 Mar 18 13:23:56.309858 master-0 kubenswrapper[27835]: I0318 13:23:56.309763 27835 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="e51c044a2405dc8e2c15e99d23adc3d518ef8ba93339eb0eb649f5a9e556f757" exitCode=0 Mar 18 13:23:56.313374 master-0 kubenswrapper[27835]: I0318 13:23:56.313302 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-x8r78_0e7156cf-2d68-4de8-b7e7-60e1539590dd/approver/1.log" Mar 18 13:23:56.314984 master-0 kubenswrapper[27835]: I0318 13:23:56.314924 27835 generic.go:334] "Generic (PLEG): container finished" podID="0e7156cf-2d68-4de8-b7e7-60e1539590dd" containerID="0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09" exitCode=1 Mar 18 13:23:56.321794 master-0 kubenswrapper[27835]: I0318 13:23:56.321743 27835 generic.go:334] "Generic (PLEG): container finished" podID="a2bdf5b0-8764-4b15-97c9-20af36634fd0" containerID="fbf0aecf9f06b167d5a00c6e13e0a1fb74d188d7a55e8c083388c3f5b4d41a40" exitCode=0 Mar 18 13:23:56.327516 master-0 kubenswrapper[27835]: I0318 13:23:56.327468 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2316774-4ebc-4fa9-be07-eb1f16f614dd" containerID="1a099b747318c0fe3ecf7281f4b981921dcc9c60c98ba0e17565f1557ebc2839" exitCode=0 Mar 18 13:23:56.327516 master-0 
kubenswrapper[27835]: I0318 13:23:56.327509 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2316774-4ebc-4fa9-be07-eb1f16f614dd" containerID="4c8a9dfdf52860c843b25f4e4b2d64bea7e0f6631bfdbe29d75a91918d723d48" exitCode=0 Mar 18 13:23:56.329093 master-0 kubenswrapper[27835]: I0318 13:23:56.329061 27835 generic.go:334] "Generic (PLEG): container finished" podID="ce43e217adc4d0869adee3ba7c628c00" containerID="057d6561c0f4da44fc1dbbb3cf541c1859a6f838b5eed3e585b47f89bb483358" exitCode=0 Mar 18 13:23:56.332846 master-0 kubenswrapper[27835]: I0318 13:23:56.332807 27835 generic.go:334] "Generic (PLEG): container finished" podID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerID="ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5" exitCode=0 Mar 18 13:23:56.335047 master-0 kubenswrapper[27835]: I0318 13:23:56.335016 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gxxbr_f7f4ae93-428b-4ebd-bfaa-18359b407ede/network-operator/0.log" Mar 18 13:23:56.335162 master-0 kubenswrapper[27835]: I0318 13:23:56.335052 27835 generic.go:334] "Generic (PLEG): container finished" podID="f7f4ae93-428b-4ebd-bfaa-18359b407ede" containerID="0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c" exitCode=255 Mar 18 13:23:56.337788 master-0 kubenswrapper[27835]: I0318 13:23:56.337764 27835 generic.go:334] "Generic (PLEG): container finished" podID="2a25632e-32d0-43d2-9be7-f515d29a1720" containerID="03f24e4774570f5bcb22723cea17bbe58e8e6018e449616ad7396efe7f6ed545" exitCode=0 Mar 18 13:23:56.337874 master-0 kubenswrapper[27835]: I0318 13:23:56.337782 27835 generic.go:334] "Generic (PLEG): container finished" podID="2a25632e-32d0-43d2-9be7-f515d29a1720" containerID="c180f7f3ef28dbeeb20612afdf694c75b1483a1a6158630039543cf7971e63f5" exitCode=0 Mar 18 13:23:56.342516 master-0 kubenswrapper[27835]: I0318 13:23:56.342476 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/cluster-autoscaler-operator/0.log"
Mar 18 13:23:56.343018 master-0 kubenswrapper[27835]: I0318 13:23:56.342956 27835 generic.go:334] "Generic (PLEG): container finished" podID="2b12af9a-8041-477f-90eb-05bb6ae7861a" containerID="1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0" exitCode=255
Mar 18 13:23:56.347847 master-0 kubenswrapper[27835]: I0318 13:23:56.347781 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="22e1bd5e28c298ede758e5ddea0b33351ac8c7be1111bab8e7269abdb7d0b24d" exitCode=0
Mar 18 13:23:56.347933 master-0 kubenswrapper[27835]: I0318 13:23:56.347852 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="7dd7465ff0a0e7bd1744dc8ce263fa13a50d77f65ff8439074a245d515a4445a" exitCode=0
Mar 18 13:23:56.347933 master-0 kubenswrapper[27835]: I0318 13:23:56.347862 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="ce72b00f2972d5446b5f276006e7acfa3fdc14bc227bc60b88d427b8aca46c01" exitCode=0
Mar 18 13:23:56.347933 master-0 kubenswrapper[27835]: I0318 13:23:56.347893 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8" exitCode=0
Mar 18 13:23:56.347933 master-0 kubenswrapper[27835]: I0318 13:23:56.347901 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="2c7ef62a916ad3298edbd1aa1cbc3e8ff60647bfc3a55655d38feae6a6189afb" exitCode=0
Mar 18 13:23:56.347933 master-0 kubenswrapper[27835]: I0318 13:23:56.347909 27835 generic.go:334] "Generic (PLEG): container finished" podID="767da57e-44e4-4861-bc6f-427c5bbb4d9d" containerID="b002856dfe7358511cd094dcfacc7030cb861d82b50197ce9130a1536facf510" exitCode=0
Mar 18 13:23:56.349647 master-0 kubenswrapper[27835]: I0318 13:23:56.349620 27835 generic.go:334] "Generic (PLEG): container finished" podID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerID="bd16bdf4e73c45c278128af3a659c5a213de4cb9ef8b0c72e75eabe56dd40dbc" exitCode=0
Mar 18 13:23:56.351895 master-0 kubenswrapper[27835]: I0318 13:23:56.351874 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/2.log"
Mar 18 13:23:56.352500 master-0 kubenswrapper[27835]: I0318 13:23:56.352457 27835 generic.go:334] "Generic (PLEG): container finished" podID="ac6d8eb6-1d5e-4757-9823-5ffe478c711c" containerID="3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b" exitCode=1
Mar 18 13:23:56.354867 master-0 kubenswrapper[27835]: I0318 13:23:56.354831 27835 generic.go:334] "Generic (PLEG): container finished" podID="9548e397-0db4-41c8-9cc8-b575060e9c66" containerID="403ebc2e5a41ebd83d754ef243b009a18ec0ae88fbc50c4907c8838a7c5edab4" exitCode=0
Mar 18 13:23:56.354867 master-0 kubenswrapper[27835]: I0318 13:23:56.354855 27835 generic.go:334] "Generic (PLEG): container finished" podID="9548e397-0db4-41c8-9cc8-b575060e9c66" containerID="aacaa4f75f3c9d2bdb4d347974e6b6d65020cdef4eea519f86746e64d1055396" exitCode=0
Mar 18 13:23:56.360997 master-0 kubenswrapper[27835]: I0318 13:23:56.360905 27835 generic.go:334] "Generic (PLEG): container finished" podID="ab2f96fb-ef55-4427-a598-7e3f1e224045" containerID="5848e50846e9206c31c30b47f8e7f2df5ddc303c266302abaf44f36dbaa6229a" exitCode=0
Mar 18 13:23:56.365685 master-0 kubenswrapper[27835]: I0318 13:23:56.365628 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-t84s9_d2455453-5943-49ef-bfea-cba077197da0/catalog-operator/1.log"
Mar 18 13:23:56.365766 master-0 kubenswrapper[27835]: I0318 13:23:56.365689 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2455453-5943-49ef-bfea-cba077197da0" containerID="c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0" exitCode=1
Mar 18 13:23:56.367697 master-0 kubenswrapper[27835]: I0318 13:23:56.367661 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41/installer/0.log"
Mar 18 13:23:56.367783 master-0 kubenswrapper[27835]: I0318 13:23:56.367702 27835 generic.go:334] "Generic (PLEG): container finished" podID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerID="31b0fc8784eb8367b69b8a7c847bfd1469f93f534490b89c89aa0c82a72151b2" exitCode=1
Mar 18 13:23:56.369789 master-0 kubenswrapper[27835]: I0318 13:23:56.369743 27835 generic.go:334] "Generic (PLEG): container finished" podID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerID="8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c" exitCode=0
Mar 18 13:23:56.373212 master-0 kubenswrapper[27835]: I0318 13:23:56.373193 27835 generic.go:334] "Generic (PLEG): container finished" podID="0a6090f0-3a27-4102-b8dd-b071644a3543" containerID="d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7" exitCode=0
Mar 18 13:23:56.380378 master-0 kubenswrapper[27835]: E0318 13:23:56.380307 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:23:56.383221 master-0 kubenswrapper[27835]: I0318 13:23:56.382872 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log"
Mar 18 13:23:56.383498 master-0 kubenswrapper[27835]: I0318 13:23:56.383468 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/cluster-cloud-controller-manager/0.log"
Mar 18 13:23:56.383553 master-0 kubenswrapper[27835]: I0318 13:23:56.383518 27835 generic.go:334] "Generic (PLEG): container finished" podID="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" containerID="d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31" exitCode=1
Mar 18 13:23:56.383553 master-0 kubenswrapper[27835]: I0318 13:23:56.383538 27835 generic.go:334] "Generic (PLEG): container finished" podID="80994f33-21e7-45d6-9f21-1cfd8e1f41ce" containerID="b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29" exitCode=1
Mar 18 13:23:56.386935 master-0 kubenswrapper[27835]: I0318 13:23:56.386889 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-kvbzn_c3ff09ab-cbe1-49e7-8121-5f71997a5176/cluster-node-tuning-operator/0.log"
Mar 18 13:23:56.387111 master-0 kubenswrapper[27835]: I0318 13:23:56.387022 27835 generic.go:334] "Generic (PLEG): container finished" podID="c3ff09ab-cbe1-49e7-8121-5f71997a5176" containerID="8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95" exitCode=1
Mar 18 13:23:56.391635 master-0 kubenswrapper[27835]: I0318 13:23:56.391609 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_62eae2a9-2667-431e-ad73-ca18124d01f6/installer/0.log"
Mar 18 13:23:56.391712 master-0 kubenswrapper[27835]: I0318 13:23:56.391638 27835 generic.go:334] "Generic (PLEG): container finished" podID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerID="84d4addeaab69d00ff961004821b23d05bc68d242853d91f47889592129b1a88" exitCode=1
Mar 18 13:23:56.394031 master-0 kubenswrapper[27835]: I0318 13:23:56.393962 27835 generic.go:334] "Generic (PLEG): container finished" podID="0c2c4a58-9780-4ecd-b417-e590ac3576ed" containerID="8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83" exitCode=0
Mar 18 13:23:56.396604 master-0 kubenswrapper[27835]: I0318 13:23:56.396546 27835 generic.go:334] "Generic (PLEG): container finished" podID="d42bcf13-548b-46c4-9a3d-a46f1b6ec045" containerID="d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b" exitCode=0
Mar 18 13:23:56.399862 master-0 kubenswrapper[27835]: I0318 13:23:56.399826 27835 generic.go:334] "Generic (PLEG): container finished" podID="b75d4622-ac12-4f82-afc9-ab63e6278b0c" containerID="efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2" exitCode=0
Mar 18 13:23:56.401776 master-0 kubenswrapper[27835]: I0318 13:23:56.401757 27835 generic.go:334] "Generic (PLEG): container finished" podID="15a97fe2-5022-4997-9936-4247ae7ecb43" containerID="6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6" exitCode=0
Mar 18 13:23:56.410579 master-0 kubenswrapper[27835]: I0318 13:23:56.410530 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_615539dc-56e1-4489-9aee-33b3e769d4fc/installer/0.log"
Mar 18 13:23:56.410697 master-0 kubenswrapper[27835]: I0318 13:23:56.410585 27835 generic.go:334] "Generic (PLEG): container finished" podID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerID="60014c22022db848874d3a05474beca08d37dd24a5fad732534f373108a2dd40" exitCode=1
Mar 18 13:23:56.413883 master-0 kubenswrapper[27835]: I0318 13:23:56.413793 27835 generic.go:334] "Generic (PLEG): container finished" podID="394061b4-1bac-4699-96d2-88558c1adaf8" containerID="c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce" exitCode=0
Mar 18 13:23:56.424331 master-0 kubenswrapper[27835]: I0318 13:23:56.424258 27835 generic.go:334] "Generic (PLEG): container finished" podID="e390416b-4fa1-41d5-bc74-9e779b252350" containerID="b43b7d12d5938ada2c8a891881e47265567c35b517ea58afd154109c58f9fc86" exitCode=0
Mar 18 13:23:56.424331 master-0 kubenswrapper[27835]: I0318 13:23:56.424313 27835 generic.go:334] "Generic (PLEG): container finished" podID="e390416b-4fa1-41d5-bc74-9e779b252350" containerID="430ac96fd015a6eea0a650279b116d5a8e02003f3361085b042396c185be38af" exitCode=0
Mar 18 13:23:56.427383 master-0 kubenswrapper[27835]: I0318 13:23:56.427326 27835 generic.go:334] "Generic (PLEG): container finished" podID="bf9d21f9-64d6-4e21-a985-491197038568" containerID="e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421" exitCode=0
Mar 18 13:23:56.429380 master-0 kubenswrapper[27835]: I0318 13:23:56.429341 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-ff75f747c-r46tm_3ee0f85b-219b-47cb-a22a-67d359a69881/packageserver/0.log"
Mar 18 13:23:56.429380 master-0 kubenswrapper[27835]: I0318 13:23:56.429373 27835 generic.go:334] "Generic (PLEG): container finished" podID="3ee0f85b-219b-47cb-a22a-67d359a69881" containerID="48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80" exitCode=2
Mar 18 13:23:56.430876 master-0 kubenswrapper[27835]: I0318 13:23:56.430839 27835 generic.go:334] "Generic (PLEG): container finished" podID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerID="2307d9f9b6edb7075e27303dc674c0604795c0e793d990a0bd35a8d4c7882a78" exitCode=0
Mar 18 13:23:56.432439 master-0 kubenswrapper[27835]: I0318 13:23:56.432379 27835 generic.go:334] "Generic (PLEG): container finished" podID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerID="6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a" exitCode=0
Mar 18 13:23:56.435152 master-0 kubenswrapper[27835]: I0318 13:23:56.435093 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b" exitCode=0
Mar 18 13:23:56.437081 master-0 kubenswrapper[27835]: I0318 13:23:56.437052 27835 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-q2ndb_16f8e725-f18a-478e-88c5-87d54aeb4857/manager/1.log"
Mar 18 13:23:56.437488 master-0 kubenswrapper[27835]: I0318 13:23:56.437459 27835 generic.go:334] "Generic (PLEG): container finished" podID="16f8e725-f18a-478e-88c5-87d54aeb4857" containerID="8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088" exitCode=1
Mar 18 13:23:56.440513 master-0 kubenswrapper[27835]: I0318 13:23:56.440484 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-5vhnr_bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/control-plane-machine-set-operator/0.log"
Mar 18 13:23:56.440513 master-0 kubenswrapper[27835]: I0318 13:23:56.440512 27835 generic.go:334] "Generic (PLEG): container finished" podID="bdcd72a6-a8e8-47ba-8b51-7325d35bad6b" containerID="7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7" exitCode=1
Mar 18 13:23:56.443665 master-0 kubenswrapper[27835]: I0318 13:23:56.443634 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 13:23:56.443983 master-0 kubenswrapper[27835]: I0318 13:23:56.443948 27835 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b" exitCode=1
Mar 18 13:23:56.443983 master-0 kubenswrapper[27835]: I0318 13:23:56.443970 27835 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f" exitCode=0
Mar 18 13:23:56.448995 master-0 kubenswrapper[27835]: I0318 13:23:56.448945 27835 generic.go:334] "Generic (PLEG): container finished" podID="a8eff549-02f3-446e-b3a1-a66cecdc02a6" containerID="8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a" exitCode=0
Mar 18 13:23:56.453718 master-0 kubenswrapper[27835]: I0318 13:23:56.453679 27835 generic.go:334] "Generic (PLEG): container finished" podID="5217b77d-b517-45c3-b76d-eee86d72b141" containerID="44724c38cb2d6b59ba2396d53ded36b1d7f457c6dd6834e92f2a09e247880a38" exitCode=0
Mar 18 13:23:56.456983 master-0 kubenswrapper[27835]: I0318 13:23:56.456959 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/4.log"
Mar 18 13:23:56.457987 master-0 kubenswrapper[27835]: I0318 13:23:56.457959 27835 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1" exitCode=1
Mar 18 13:23:56.459598 master-0 kubenswrapper[27835]: I0318 13:23:56.459568 27835 generic.go:334] "Generic (PLEG): container finished" podID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerID="d3c2d483573799510afcab12d760b1183078a2dd2aa3d3d851d413db0b1d8ab1" exitCode=0
Mar 18 13:23:56.462146 master-0 kubenswrapper[27835]: I0318 13:23:56.462122 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/3.log"
Mar 18 13:23:56.462209 master-0 kubenswrapper[27835]: I0318 13:23:56.462155 27835 generic.go:334] "Generic (PLEG): container finished" podID="deb67ea0-8342-40cb-b0f4-115270e878dd" containerID="006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5" exitCode=1
Mar 18 13:23:56.470670 master-0 kubenswrapper[27835]: I0318 13:23:56.470627 27835 generic.go:334] "Generic (PLEG): container finished" podID="7fb5bad7-07d9-45ac-ad27-a887d12d148f" containerID="36dcdc5868f986f835679461c4df710fd18e0dcfbcbbdc4c74c1460f2651a842" exitCode=0
Mar 18 13:23:56.473549 master-0 kubenswrapper[27835]: I0318 13:23:56.473512 27835 generic.go:334] "Generic (PLEG): container finished" podID="702076a9-b542-4768-9e9e-99b2cac0a66e" containerID="e09c13a4c855b0e00ad1329ef737699f109774957ff6b437737fd8c1e39daca5" exitCode=0
Mar 18 13:23:56.474894 master-0 kubenswrapper[27835]: I0318 13:23:56.474865 27835 generic.go:334] "Generic (PLEG): container finished" podID="7dca7577-6bee-4dd3-917a-7b7ccc42f0fc" containerID="b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311" exitCode=0
Mar 18 13:23:56.477290 master-0 kubenswrapper[27835]: I0318 13:23:56.477259 27835 generic.go:334] "Generic (PLEG): container finished" podID="cb385758-78ae-46b3-994e-fec9b14b7322" containerID="254c4c55fc5a8cefc576158a3cd6566c4e22decb0988ded62e89b98504ee1458" exitCode=0
Mar 18 13:23:56.479878 master-0 kubenswrapper[27835]: I0318 13:23:56.479848 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log"
Mar 18 13:23:56.480132 master-0 kubenswrapper[27835]: I0318 13:23:56.480097 27835 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64" exitCode=1
Mar 18 13:23:56.480132 master-0 kubenswrapper[27835]: I0318 13:23:56.480118 27835 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="66077c2a26014879f2ee8a44731dd4750343ebe7a4a34fc0f126a55d48c25d7c" exitCode=0
Mar 18 13:23:56.481553 master-0 kubenswrapper[27835]: I0318 13:23:56.481460 27835 generic.go:334] "Generic (PLEG): container finished" podID="19a76585-a9ac-4ed9-9146-bb77b31848c6" containerID="f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b" exitCode=0
Mar 18 13:23:56.482689 master-0 kubenswrapper[27835]: I0318 13:23:56.482659 27835 generic.go:334] "Generic (PLEG): container finished" podID="34a3a84b-048f-4822-9f05-0e7509327ca2" containerID="f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592" exitCode=0
Mar 18 13:23:56.486251 master-0 kubenswrapper[27835]: I0318 13:23:56.486197 27835 generic.go:334] "Generic (PLEG): container finished" podID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerID="4daffe612ceab094bb2d1f38476f0856eefbbaa467bc42a2d0b021a9807cf03f" exitCode=0
Mar 18 13:23:56.487962 master-0 kubenswrapper[27835]: I0318 13:23:56.487937 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/machine-api-operator/0.log"
Mar 18 13:23:56.488262 master-0 kubenswrapper[27835]: I0318 13:23:56.488243 27835 generic.go:334] "Generic (PLEG): container finished" podID="68104a8c-3fac-4d4b-b975-bc2d045b3375" containerID="48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263" exitCode=255
Mar 18 13:23:56.490792 master-0 kubenswrapper[27835]: I0318 13:23:56.490765 27835 generic.go:334] "Generic (PLEG): container finished" podID="595f697b-d238-4500-84ce-1ea00377f05e" containerID="239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54" exitCode=0
Mar 18 13:23:56.495135 master-0 kubenswrapper[27835]: I0318 13:23:56.495062 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-fk8ql_f38b464d-a218-4753-b7ac-a7d373952c4d/machine-approver-controller/0.log"
Mar 18 13:23:56.495387 master-0 kubenswrapper[27835]: I0318 13:23:56.495359 27835 generic.go:334] "Generic (PLEG): container finished" podID="f38b464d-a218-4753-b7ac-a7d373952c4d" containerID="55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931" exitCode=255
Mar 18 13:23:56.498084 master-0 kubenswrapper[27835]: I0318 13:23:56.498045 27835 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc" exitCode=0
Mar 18 13:23:56.498084 master-0 kubenswrapper[27835]: I0318 13:23:56.498065 27835 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="a4c090aab4f3bf89ced608a71e5db3af3d21ed7b2100020f019a5440d122cecc" exitCode=0
Mar 18 13:23:56.498084 master-0 kubenswrapper[27835]: I0318 13:23:56.498075 27835 generic.go:334] "Generic (PLEG): container finished" podID="902909ca-ab08-49aa-9736-70e073f8e67d" containerID="0697c4988a8dce166398ff970c57c5e68178bc04fae2f2829aa0dffd05961950" exitCode=0
Mar 18 13:23:56.499360 master-0 kubenswrapper[27835]: I0318 13:23:56.499325 27835 generic.go:334] "Generic (PLEG): container finished" podID="9b853631-ff77-4643-aa07-b1f8056320a3" containerID="64aef303c60ed75302cdf53b54c1f5e7b01831e38260821ecee71573b2f8873b" exitCode=0
Mar 18 13:23:56.500678 master-0 kubenswrapper[27835]: I0318 13:23:56.500645 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-9bjsj_98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617/manager/1.log"
Mar 18 13:23:56.500940 master-0 kubenswrapper[27835]: I0318 13:23:56.500909 27835 generic.go:334] "Generic (PLEG): container finished" podID="98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617" containerID="373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d" exitCode=1
Mar 18 13:23:56.580960 master-0 kubenswrapper[27835]: E0318 13:23:56.580896 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:23:56.981954 master-0 kubenswrapper[27835]: E0318 13:23:56.981876 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:23:57.205818 master-0 kubenswrapper[27835]: I0318 13:23:57.205732 27835 apiserver.go:52] "Watching apiserver"
Mar 18 13:23:57.233952 master-0 kubenswrapper[27835]: I0318 13:23:57.233799 27835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 13:23:57.783073 master-0 kubenswrapper[27835]: E0318 13:23:57.782977 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:23:59.383506 master-0 kubenswrapper[27835]: E0318 13:23:59.383334 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:24:02.584667 master-0 kubenswrapper[27835]: E0318 13:24:02.584559 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:24:07.584880 master-0 kubenswrapper[27835]: E0318 13:24:07.584789 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:24:12.546204 master-0 kubenswrapper[27835]: E0318 13:24:12.545902 27835 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/crio.service\": failed to get container info for \"/system.slice/crio.service\": unknown container \"/system.slice/crio.service\"" containerName="/system.slice/crio.service"
Mar 18 13:24:12.552119 master-0 kubenswrapper[27835]: E0318 13:24:12.552081 27835 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice\": failed to get container info for \"/system.slice\": unknown container \"/system.slice\"" containerName="/system.slice"
Mar 18 13:24:12.552495 master-0 kubenswrapper[27835]: E0318 13:24:12.552470 27835 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice"
Mar 18 13:24:12.585277 master-0 kubenswrapper[27835]: E0318 13:24:12.585248 27835 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 13:24:12.636959 master-0 kubenswrapper[27835]: I0318 13:24:12.636898 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zrc8h_720a1f60-c1cb-4aef-aaec-f082090ca631/multus-admission-controller/0.log"
Mar 18 13:24:12.637176 master-0 kubenswrapper[27835]: I0318 13:24:12.636959 27835 generic.go:334] "Generic (PLEG): container finished" podID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerID="b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8" exitCode=137
Mar 18 13:24:12.640389 master-0 kubenswrapper[27835]: I0318 13:24:12.640353 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/5.log"
Mar 18 13:24:12.640834 master-0 kubenswrapper[27835]: I0318 13:24:12.640803 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/4.log"
Mar 18 13:24:12.645245 master-0 kubenswrapper[27835]: I0318 13:24:12.645188 27835 generic.go:334] "Generic (PLEG): container finished" podID="d9d09a56-ed4c-40b7-8be1-f3934c07296e" containerID="a272363aabc94bf515887116c3094b118b2c3e6ac7802ab09d5f4466b9ec2a97" exitCode=1
Mar 18 13:24:12.648163 master-0 kubenswrapper[27835]: I0318 13:24:12.648126 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 18 13:24:12.655181 master-0 kubenswrapper[27835]: I0318 13:24:12.655124 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64"
containerID="721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146" exitCode=255
Mar 18 13:24:13.066754 master-0 kubenswrapper[27835]: I0318 13:24:13.066720 27835 manager.go:324] Recovery completed
Mar 18 13:24:13.152889 master-0 kubenswrapper[27835]: I0318 13:24:13.152833 27835 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 13:24:13.152889 master-0 kubenswrapper[27835]: I0318 13:24:13.152876 27835 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 13:24:13.153144 master-0 kubenswrapper[27835]: I0318 13:24:13.152914 27835 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 13:24:13.153190 master-0 kubenswrapper[27835]: I0318 13:24:13.153165 27835 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 13:24:13.153240 master-0 kubenswrapper[27835]: I0318 13:24:13.153182 27835 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 13:24:13.153240 master-0 kubenswrapper[27835]: I0318 13:24:13.153217 27835 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 13:24:13.153240 master-0 kubenswrapper[27835]: I0318 13:24:13.153226 27835 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 13:24:13.153240 master-0 kubenswrapper[27835]: I0318 13:24:13.153234 27835 policy_none.go:49] "None policy: Start"
Mar 18 13:24:13.158356 master-0 kubenswrapper[27835]: I0318 13:24:13.158312 27835 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 13:24:13.158546 master-0 kubenswrapper[27835]: I0318 13:24:13.158371 27835 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 13:24:13.158699 master-0 kubenswrapper[27835]: I0318 13:24:13.158683 27835 state_mem.go:75] "Updated machine memory state"
Mar 18 13:24:13.158699 master-0 kubenswrapper[27835]: I0318 13:24:13.158698 27835 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 13:24:13.178354 master-0 kubenswrapper[27835]: I0318 13:24:13.178318 27835 manager.go:334] "Starting Device Plugin manager"
Mar 18 13:24:13.178354 master-0 kubenswrapper[27835]: I0318 13:24:13.178372 27835 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 13:24:13.178663 master-0 kubenswrapper[27835]: I0318 13:24:13.178384 27835 server.go:79] "Starting device plugin registration server"
Mar 18 13:24:13.178763 master-0 kubenswrapper[27835]: I0318 13:24:13.178738 27835 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 13:24:13.178834 master-0 kubenswrapper[27835]: I0318 13:24:13.178754 27835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 13:24:13.178977 master-0 kubenswrapper[27835]: I0318 13:24:13.178886 27835 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 13:24:13.178977 master-0 kubenswrapper[27835]: I0318 13:24:13.178958 27835 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 13:24:13.178977 master-0 kubenswrapper[27835]: I0318 13:24:13.178965 27835 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 13:24:13.278986 master-0 kubenswrapper[27835]: I0318 13:24:13.278874 27835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 13:24:13.281086 master-0 kubenswrapper[27835]: I0318 13:24:13.281059 27835 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 13:24:13.281237 master-0 kubenswrapper[27835]: I0318 13:24:13.281210 27835 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 13:24:13.281298 master-0 kubenswrapper[27835]: I0318 13:24:13.281288 27835 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 13:24:13.281485 master-0 kubenswrapper[27835]: I0318 13:24:13.281473 27835 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 13:24:13.294334 master-0 kubenswrapper[27835]: I0318 13:24:13.294283 27835 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 13:24:13.294657 master-0 kubenswrapper[27835]: I0318 13:24:13.294422 27835 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 13:24:17.586673 master-0 kubenswrapper[27835]: I0318 13:24:17.586580 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:24:17.587365 master-0 kubenswrapper[27835]: I0318 13:24:17.587118 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt","openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p","openshift-apiserver/apiserver-85b59d8688-wd26k","openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr","openshift-kube-scheduler/installer-4-master-0","openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7","openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw","openshift-dns-operator/dns-operator-9c5679d8f-5lzzn","openshift-etcd/etcd-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd","openshift-network-node-identity/network-node-identity-x8r78","openshift-dns/node-resolver-7vddk","openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf","openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9","openshift-controller-manager/controller-manager-d7c95db55-d6lqm","openshift-ingress/router-default-7dcf5569b5-gvmtv","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl","openshift-dns/dns-default-92s8c","openshift-machine-config-operator/machine-config-server-wxht4","openshift-marketplace/certified-operators-8wqfk","openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m","openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7","openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb","openshift-marketplace/redhat-operators-89st2","openshift-monitoring/node-exporter-t4p42","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95","openshift-ingress-canary/ingress-canary-6hldc","openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7","openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm","openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp","openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4","openshift-kube-apiserver/installer-3-master-0","openshift-kube-controller-manager/installer-3-retry-1-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-49h6x","openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg","openshift-etcd/installer-2-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/marketplace-operator-89ccd998f-99pzm","openshift-multus/multus-additional-cni-plugins-ttdn5","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4","openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz","openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm","openshift-etcd/installer-1-master-0","openshift-multus/multus-vkbvp","openshift-network-diagnostics/network-check-target-kcsgp","openshift-service-ca/service-ca-79bc6b8d76-2zvf2","openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt","openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6","assisted-installer/assisted-installer-controller-7bfhd","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk","openshift-machine-config-operator/machine-config-daemon-5blrl","openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk","openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h","openshift-network-operator/network-operator-7bd846bfc4-gxxbr","openshift-ovn-kubernetes/ovnkube-node-kxqjc","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz","openshift-insights/insights-operator-68bf6ff9d6-bbqfl","openshift-kube-controller-manager/installer-2-master-0","openshift-marketplace/community-operators-tqw5h","openshift-marketplace/redhat-marketplace-bxlrz","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5","openshift-monitoring/metrics-server-65dbcd767c-7bqc9","openshift-network-operator/iptables-alerter-jkl4x","openshift-cluster-node-tuning-operator/tuned-5ftdj","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4","openshift-monitoring/kube-state-metrics-7bbc969446-mxcng","openshift-multus/network-metrics-daemon-kbfbq","openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"]
Mar 18 13:24:17.587530 master-0 kubenswrapper[27835]: I0318 13:24:17.587440 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7bfhd"
Mar 18 13:24:17.599969 master-0 kubenswrapper[27835]: I0318 13:24:17.599907 27835 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="994f49eb-f082-48c0-a73d-916d0ce332bc"
Mar 18 13:24:17.618470 master-0 kubenswrapper[27835]: I0318 13:24:17.611010 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 13:24:17.621137 master-0 kubenswrapper[27835]: I0318 13:24:17.620721 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:24:17.621137 master-0 kubenswrapper[27835]: I0318 13:24:17.620790 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.674045 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.674295 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.674619 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.675247 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.675393 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.675608 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.675753 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.675880 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.676988 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.678882 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.680662 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 13:24:17.690021 master-0 kubenswrapper[27835]: I0318 13:24:17.687694 27835 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 13:24:17.690874 master-0 kubenswrapper[27835]: I0318 13:24:17.690851 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:24:17.691465 master-0 kubenswrapper[27835]: I0318 13:24:17.691434 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 13:24:17.695220 master-0 kubenswrapper[27835]: I0318 13:24:17.694892 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:24:17.695439 master-0 kubenswrapper[27835]: I0318 13:24:17.695320 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" event={"ID":"b89fb313-d01a-4305-b123-e253b3382b85","Type":"ContainerStarted","Data":"1be597ce241a4c605b967a1c6529bc798d2a367805fff6066c48887fdc2a2af1"} Mar 18 13:24:17.695507 master-0 kubenswrapper[27835]: I0318 13:24:17.695467 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerStarted","Data":"de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1"} Mar 18 13:24:17.695507 master-0 kubenswrapper[27835]: I0318 13:24:17.695495 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerStarted","Data":"03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1"} Mar 18 13:24:17.695589 master-0 kubenswrapper[27835]: I0318 13:24:17.695515 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" 
event={"ID":"290d1f84-5c5c-4bff-b045-e6020793cded","Type":"ContainerStarted","Data":"070b282208ec733465a61cb3d4378f64269708ba5a361a70c5483204a7f87847"} Mar 18 13:24:17.695589 master-0 kubenswrapper[27835]: I0318 13:24:17.695534 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" event={"ID":"290d1f84-5c5c-4bff-b045-e6020793cded","Type":"ContainerStarted","Data":"cb74a42e367af8586d98d799b6ded81e9d93e7b3d806a9a925a94b3e763a3830"} Mar 18 13:24:17.698928 master-0 kubenswrapper[27835]: I0318 13:24:17.695550 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218"} Mar 18 13:24:17.699589 master-0 kubenswrapper[27835]: I0318 13:24:17.699474 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699600 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"cb3a395c88586f9726036952a749f0819efe1ca07bfec591e8bf77ac60734a87"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699620 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"a84047fd9b87cdfb49ea7e164528794ba2d0999a5e7dcba9dd9e544a562e4b04"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699635 27835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerDied","Data":"ce628a61289a6356a4840f81be538656bf2f65763801f5f5367447fe1929945e"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699647 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerDied","Data":"2be57f1bc2d84ad4ff4dd4172fd46f3cfddc882962f936029d991fec6bacfeb8"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699682 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" event={"ID":"ce3728ab-5d50-40ac-95b3-74a5b62a557f","Type":"ContainerStarted","Data":"e3813939efa506945bdf3b3ce075a38e8fa5a4f203ce2d57587a61e54ae68d09"} Mar 18 13:24:17.699701 master-0 kubenswrapper[27835]: I0318 13:24:17.699699 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vkbvp" event={"ID":"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10","Type":"ContainerStarted","Data":"3f3dca80b39a4776e47d8b812a2714786234fa3f72e9236861e58ef6c6314c8f"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699712 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vkbvp" event={"ID":"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10","Type":"ContainerStarted","Data":"95c74df7f62f7dc8b78c4b8838181b79403ce4060666c2043d78b39ebd9f0419"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699724 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" 
event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerStarted","Data":"b1d807d6b0428c0212050973f0490d6a880b69e8127e076fbe197dddf8a96d5b"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699736 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerDied","Data":"34f2829f920c0b8e7fad32f3489c2848036444d936bf5324856fb8eb487c04e1"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699777 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" event={"ID":"07505113-d5e7-4ea3-b9cc-8f08cba45ccc","Type":"ContainerStarted","Data":"6aa30a9c358b647ee82a65206dafbfb29c8b2259010a68ba5311f3d4fa4e3bc2"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699795 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"73ea828b1ec3fb5eeb4f799bf4aac37c9e219ec1697ef6d6fbf4963823466e19"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699807 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"447db57cab5ba3c69b27b8cc5082a77bb51da84b7ea28cd8dbd5650fa54f13e0"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699818 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"4fdf9a4a6b4d2639ef00c48189b5ca39aef049f50cde7194dc5dacc0bb496278"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699860 27835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f3b165a56beb52cbeaa61b6b02ce9e692cd29bd9898a9870c02bb4754aac4be3"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699874 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"5a427fe739d798898ee45c0bf356bb2e2c26d43edea40af7c1f44b831591867e"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699885 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"aec5346f46da33d997a4c62bc92998fc48a19573760229d71e97091c1c9a67c9"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699898 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"ac6a03840b83398cf49ffdbda9e45e37a6a4ad486796c7aa5525dfdd483b2a1c"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699936 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e51c044a2405dc8e2c15e99d23adc3d518ef8ba93339eb0eb649f5a9e556f757"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699958 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"fa7bdc6eb3bcdebec3d64b4ce8194bafce362b67c9019cd975ec6f9a5ac40f46"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699971 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" 
event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"086e9fc8ca523144919a5163c71d4016399ffa720f997ba6ec3ad12584d9cb30"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699984 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerDied","Data":"0df05b30952e8bace8c1fadfea54a4900c846053f046ccb0bcbeb970b3b63e09"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.699996 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"3e2abd83cb78987bc1cea9ff0bde57ccd8d857515f5058285b44c75df988a5ac"} Mar 18 13:24:17.700024 master-0 kubenswrapper[27835]: I0318 13:24:17.700034 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-x8r78" event={"ID":"0e7156cf-2d68-4de8-b7e7-60e1539590dd","Type":"ContainerStarted","Data":"6a1e7d1316bc2cf34931aa8f311ea4a4a0854b6b6c453298e452fe499d1407a7"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700050 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" event={"ID":"bf4c5410-fb44-45e8-ab66-24806e6349b8","Type":"ContainerStarted","Data":"ace486964b3baf64b75fe000f29856b42211876fd8f8e8061e47b74f3fd46fe3"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700063 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" event={"ID":"bf4c5410-fb44-45e8-ab66-24806e6349b8","Type":"ContainerStarted","Data":"7781d91a527c8a68391c4776d4e2edee3564ed2594b252a676d696ac4e021083"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700074 27835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"e591f5850f593f50de8a2695774798cb0f8224a8598b6cd3cc1b58fe720d0858"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700091 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"72bcea02bc364ef9e96c66bb5c3590d1a8a24253dd3b5839088e3771a465ac82"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700108 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerDied","Data":"fbf0aecf9f06b167d5a00c6e13e0a1fb74d188d7a55e8c083388c3f5b4d41a40"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700177 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" event={"ID":"a2bdf5b0-8764-4b15-97c9-20af36634fd0","Type":"ContainerStarted","Data":"7c0a9d3ecc02d97801da90faa78ea9a04fc4381142a502c2ebc0a26f2eb9f11b"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700200 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dac657218e0dfd4a3c5aa3191ba5551afdd06a112b3e1902ad24f1784c7e77bb" Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700213 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wxht4" event={"ID":"bd8aa7c1-0a04-4df0-9047-63ab846b9535","Type":"ContainerStarted","Data":"fc86d76759dab4eb23de1eef4d8d288bf3dac5716425557a62f1343bc2eae90e"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700226 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-wxht4" event={"ID":"bd8aa7c1-0a04-4df0-9047-63ab846b9535","Type":"ContainerStarted","Data":"91e6cc574de9ab0ab69c5ac67d10cbf7cd272238dd17877d6c8486b06ad54731"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700238 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerStarted","Data":"7f146f2adc27fe3158369931da2a9a1a2960129e0d07116c72c9e8f51434c0ed"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700250 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerDied","Data":"1a099b747318c0fe3ecf7281f4b981921dcc9c60c98ba0e17565f1557ebc2839"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700262 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerDied","Data":"4c8a9dfdf52860c843b25f4e4b2d64bea7e0f6631bfdbe29d75a91918d723d48"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700273 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wqfk" event={"ID":"d2316774-4ebc-4fa9-be07-eb1f16f614dd","Type":"ContainerStarted","Data":"f8ec4afc73563013c96a8e1eace508a943272ee46d78033f1795223ee51579db"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700295 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60156cd8a457797db9bf54e48022d6e4ae174300834ce3ef829021fe366c28b0" Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700310 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" event={"ID":"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9","Type":"ContainerStarted","Data":"9d653ae820cf4c6210b4fa575e4bc19b9b9f22c2b83029e507c4eb09ffdf189c"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700323 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" event={"ID":"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9","Type":"ContainerStarted","Data":"1cceb4712c77ca2fdf0849f1bea9fd2ebeb3d8a95d1db4ec067d2a7d333a8d1f"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700333 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"3ad887c0a7265b813a19c4352fb7d718fc8a0cbf00d4ec6a7cef361eef024983"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700345 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerDied","Data":"ca11ff8dd74bbd57e44f6070d192194a64ab628351ce867a0ac332f4e51a71b5"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700356 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" event={"ID":"00375107-9a3b-4161-a90d-72ea8827c5fc","Type":"ContainerStarted","Data":"ca95e515f4a5a1b63626328ea2ad328d0f3f07c258a5281fc61399ac842b383f"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700366 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerStarted","Data":"12f0bb461bc477b8eb65dc72156ed9ad8f7e41968ae2d0ef9cad32f3e837b199"} Mar 18 13:24:17.702298 master-0 
kubenswrapper[27835]: I0318 13:24:17.700457 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerDied","Data":"0f68e5c45ea6d8fc8605559b1dd3501571f6348a64337151b3b9a1c54518d47c"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700499 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" event={"ID":"f7f4ae93-428b-4ebd-bfaa-18359b407ede","Type":"ContainerStarted","Data":"2df3167c99041fb8b521641e83cdf585c987ff07f0be8411cb46dd3d61303f4c"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700517 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerStarted","Data":"e8e8e5f5ee6fd77b7212349b29251fc3476241d2ab0b5d83a3ecdc238d84a2ae"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700528 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerDied","Data":"03f24e4774570f5bcb22723cea17bbe58e8e6018e449616ad7396efe7f6ed545"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700540 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerDied","Data":"c180f7f3ef28dbeeb20612afdf694c75b1483a1a6158630039543cf7971e63f5"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700551 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqw5h" 
event={"ID":"2a25632e-32d0-43d2-9be7-f515d29a1720","Type":"ContainerStarted","Data":"fd58bf4306c0d3457858f4ec24d59cd979f6f4afdc73f13f04c121d2cc971fc3"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700561 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" event={"ID":"6db2bfbd-d8db-4384-8979-23e8a1e87e5e","Type":"ContainerStarted","Data":"832f56b880d15099c333330fc427d4ba30a01745231832a4a7863a3a894c690d"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700573 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" event={"ID":"6db2bfbd-d8db-4384-8979-23e8a1e87e5e","Type":"ContainerStarted","Data":"bb505f490d9f0d175bd48b40f2b116d9b59fe037e6e27e85a04f72f615f5d521"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700583 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"46bdd357defc2dd21565769c8123edf1ef61b5a491fb0aa0d385f559a48dfecf"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700594 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerDied","Data":"1ea74ec7ff988c3aa1326aad273ebf989a1e564b326b601e6eb48c414dd19ee0"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700614 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"98f348c48c25a3ad3d98bffee1f3f7c9ece63bd1ce7d6bb87b45e89183bb6b2b"} Mar 18 13:24:17.702298 master-0 
kubenswrapper[27835]: I0318 13:24:17.700632 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" event={"ID":"2b12af9a-8041-477f-90eb-05bb6ae7861a","Type":"ContainerStarted","Data":"ed3daf11e343e1b2061522afa05ec8c54dad41a761078c089559715ea58a7e8b"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerStarted","Data":"a3d21d22429f0c722ae40da9b05bb2559f747cf0cc8ac63b185beb2d14f0e235"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700657 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"22e1bd5e28c298ede758e5ddea0b33351ac8c7be1111bab8e7269abdb7d0b24d"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700668 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"7dd7465ff0a0e7bd1744dc8ce263fa13a50d77f65ff8439074a245d515a4445a"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700679 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"ce72b00f2972d5446b5f276006e7acfa3fdc14bc227bc60b88d427b8aca46c01"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700691 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" 
event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"e6c5e39905127934bde209ce2f1016715a59ddc9fc387b1a3a64af536455bdb8"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700708 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"2c7ef62a916ad3298edbd1aa1cbc3e8ff60647bfc3a55655d38feae6a6189afb"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700725 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerDied","Data":"b002856dfe7358511cd094dcfacc7030cb861d82b50197ce9130a1536facf510"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700737 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttdn5" event={"ID":"767da57e-44e4-4861-bc6f-427c5bbb4d9d","Type":"ContainerStarted","Data":"c9e13e3e5b49e845caaa9344e46631934d49f01be6a7ead87d5884c85f85894d"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700748 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerDied","Data":"bd16bdf4e73c45c278128af3a659c5a213de4cb9ef8b0c72e75eabe56dd40dbc"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700788 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"814ffa63-b08e-4de8-b912-8d7f0638230b","Type":"ContainerDied","Data":"399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700801 27835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="399e36a17781740e987661a51d15dc9628e7dba92fbae5bfa7767552365b7e5a" Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700812 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"dc388f1effef07f85f07a2d22d20e7738827bcf12878e52c4f8e033bb80ad74c"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700824 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerDied","Data":"3ec5f61268f5b704bdd3ae4759c44192ac2e3c0b60c608cf999dd449ac28017b"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700835 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"3cfbaa8df9a218f8dee119016e4288585bf40d98d2be646b4c356cf8d4a6af1b"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700845 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" event={"ID":"ac6d8eb6-1d5e-4757-9823-5ffe478c711c","Type":"ContainerStarted","Data":"5e20d46e2ff68c35ec5f71de1a7613daa62264adc487ab5ef65e9454569fe466"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700857 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerStarted","Data":"9e4278835752208e516e1189ae8ac5a890d3d4160a41274c8f9f115a5ab41220"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.700867 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerDied","Data":"403ebc2e5a41ebd83d754ef243b009a18ec0ae88fbc50c4907c8838a7c5edab4"} Mar 18 13:24:17.702298 master-0 kubenswrapper[27835]: I0318 13:24:17.701357 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerDied","Data":"aacaa4f75f3c9d2bdb4d347974e6b6d65020cdef4eea519f86746e64d1055396"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702613 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702658 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-89st2" event={"ID":"9548e397-0db4-41c8-9cc8-b575060e9c66","Type":"ContainerStarted","Data":"5eea39afe08c6fda2308b0aa93f656fdde076cef1d17307c9c4b3694c8a0bf52"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702684 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"119d9e30269ff1ad7de2d957f082d56c48f1d252e3456aa6d95395e7b3eb424b"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702786 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"e209b4b73eecb88cb54a49758853303bdcdf5c32268cc6c2b82da80281f8a70f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702830 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" 
event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"f9cf42fa7b1174c3acfbdce01651da3818728bde2c45ab899be0bac58cf14d63"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702852 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"c2910bb6cfe70a0b7fe7aec2dfbcd08566c4710c84d5c9e277f29f1f256e1137"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702868 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"907b5d903e5957f54b3bce3ec41f050fa3c3f32c30e69541581441ecd0e3d71f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702928 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"2b48ee454e0a7faaac5086b96c579627b7ea7c7f153d481a4dbe7060bc0b9ae5"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702966 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"ab9a907722835d84f71abb4d9eab924c219518b800540915689a3323e5847cd3"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703005 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"ac602c08b43d4d9d84ca16d70364a42759fac3f28c0a56e0ee205a06885a2fad"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703021 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerDied","Data":"5848e50846e9206c31c30b47f8e7f2df5ddc303c266302abaf44f36dbaa6229a"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703035 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" event={"ID":"ab2f96fb-ef55-4427-a598-7e3f1e224045","Type":"ContainerStarted","Data":"35f2a49474234a3cc3d6b357341939ab9604ca7cc08b21e5412a5ae4810169c5"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703046 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"5fb9005824f3eda87674b34f8ef509039990a1d8b887fbb8b0af782cf52d8bd8"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703083 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"bb9b31a21bb7804acfec780725627f507b26c743a8732b84d2fc722559953044"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703098 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" event={"ID":"5a4202c2-c330-4a5d-87e7-0a63d069113f","Type":"ContainerStarted","Data":"29147c6f3d8625422f173796ecc5c56624b69d9bc34abe3727182adc4dde3e20"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703109 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"d0f4426f9820f0b9e0e6d6fc6e2530dc463048378180953ea79d72f72fda3c7c"} Mar 18 
13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703171 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703178 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerDied","Data":"c6d06965eb2aa010cd8386f146e5e7d18099615f71beaf1a6e240f94bd2aecf0"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703194 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" event={"ID":"d2455453-5943-49ef-bfea-cba077197da0","Type":"ContainerStarted","Data":"637824f5bb31724423d6735813857b47b37d15ab88987d8a010fd58f58c5ab69"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703275 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerDied","Data":"31b0fc8784eb8367b69b8a7c847bfd1469f93f534490b89c89aa0c82a72151b2"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703299 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41","Type":"ContainerDied","Data":"3cb0fd8ad50843d858abaee21b28a02e53fe5cd0a20c10c6df87f1573285730f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703313 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb0fd8ad50843d858abaee21b28a02e53fe5cd0a20c10c6df87f1573285730f" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703353 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerStarted","Data":"313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703354 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.703371 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerDied","Data":"8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704408 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerStarted","Data":"3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704477 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"7a03a9b2903f78b606da104794c398882ae1463636eb659e02174a991cae43c1"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704497 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"b4b45a7fb108962bc9dd2947cc8423b17d4611ef737a6e7507f8ef8f54c77640"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704514 27835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" event={"ID":"2ea9eb53-0385-4a1a-a64f-696f8520cf49","Type":"ContainerStarted","Data":"5ea36913089cb553f8b6a17431d06736cf6ac63c1508cc4d7903325dd9e50f7f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704531 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerStarted","Data":"efc7dcc65e51970be3f223938d17e3608d2b08a5580819c1889dcf943e6c33b1"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704552 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerDied","Data":"d0ac20086f35d51bcf8fc783fb1c1bf1ac3f8ca49ee1fa8aafa1da1a9b8115d7"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704574 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" event={"ID":"0a6090f0-3a27-4102-b8dd-b071644a3543","Type":"ContainerStarted","Data":"5f5a7d7c0e9750e48ccca14b1c41ca2a57206319db458c1aefe78bdb62a1f334"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704591 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"71d3404a38107a5a5dbdfa1cee4c6928a2f2a83f6bfa89195edb436d961641f1"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704610 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" 
event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"5f3c8d778382c867b08fbd74f7923dd512336eb8b121e70f84bb319617f783a5"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704627 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"aea0ba9c47771383cbd332d289d9bc75e884ce916b9826020091a8cb0cfb26f5"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerDied","Data":"d50601e164ccfcbdf07931c427e847ca4740015597032ab2b84aea93b2d7cd31"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704663 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerDied","Data":"b60e278771d4ab09e373261d0f5e1a2d382ec8ee4872ddb07f8d9ad772242c29"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704681 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" event={"ID":"80994f33-21e7-45d6-9f21-1cfd8e1f41ce","Type":"ContainerStarted","Data":"6269dfbb0082e40be315007eb2be8e6ed68859c371da0d4ee487418e5943d283"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704690 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704699 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerStarted","Data":"3a95d1fdcb3068d3515bb9fdf082318832b4849fdd4dcfcbda66215465532969"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704727 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerDied","Data":"8a0561b48d7cbb59281ef2be420f500c179586e31854a6ba87f0ee5471e4ee95"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704745 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" event={"ID":"c3ff09ab-cbe1-49e7-8121-5f71997a5176","Type":"ContainerStarted","Data":"35c33231cc5394e541c516a963005ff2abf91292685c4e1cbb8e7e960d479ab2"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704752 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704757 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"b6426d584feaf1dccc9586fadfcc5b8411ec145f968fc4d370c3068013252e93"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704786 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"badd417202c4299e05d5e5c0664cbf010b21bb652f30b93278ac43926e68a829"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704805 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" event={"ID":"830ff1d6-332e-46b1-b13c-c2507fdc3c19","Type":"ContainerStarted","Data":"b4b9a672b76f3adc2ab4b631c1e084b51c152e435a2f95e756fd77ce61bb9196"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704823 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerDied","Data":"84d4addeaab69d00ff961004821b23d05bc68d242853d91f47889592129b1a88"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704844 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"62eae2a9-2667-431e-ad73-ca18124d01f6","Type":"ContainerDied","Data":"d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704863 27835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d8f5aeb16a7c0c81d68e0b4c65665a3f2d3427f082b4e9bb5b808153582ec4ae" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704880 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerStarted","Data":"ece7f8de2256ca4c5499a6c68682a60215b1ff9074f8ada25360681bd459a76c"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704898 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerDied","Data":"8e530c2314387d6faa3389f896853faadcabf48e6b1056d8665d0aee6b25ba83"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704916 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" event={"ID":"0c2c4a58-9780-4ecd-b417-e590ac3576ed","Type":"ContainerStarted","Data":"aadd21574589df05a94b4c4fadbf0dfafa5f50f06c631557a3bc30c9b28ade98"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704934 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"d9d04bdcfdc2c33ba07b3882d662c20d9203671752e04b4037bf3995673ad759"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704952 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerDied","Data":"d95ca6e96bcbe20e26ec06e8bea97630f7abc38b8dcb855ed93eec8b8ea1c22b"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704972 27835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"290bb48008e0b7c46cc865de084cb0c95db01085d2c9c06c7668d41505cbf49a"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705043 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" event={"ID":"d42bcf13-548b-46c4-9a3d-a46f1b6ec045","Type":"ContainerStarted","Data":"4b4e7da45f4c21e2af79784fe7c50524be71fedf894b5ecf38d0d12d29080573"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705063 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jkl4x" event={"ID":"053cc9bc-f98e-46f6-93bb-b5344d20bf74","Type":"ContainerStarted","Data":"7c62277fe5706e0717cad60492fc0cd55a642ceb15d309441041259d65ca5acd"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705085 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-jkl4x" event={"ID":"053cc9bc-f98e-46f6-93bb-b5344d20bf74","Type":"ContainerStarted","Data":"c3a20ede6cada5383a3c17314cdc63a1bd82056b7193b0a825d73322086a74cd"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705104 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"61c248cba1bc559e1d4464ce4ef3f38b93e86ef81619df8b81ab863a153e9722"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705125 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" 
event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerDied","Data":"efe6e287c36852699c4eb20fb17353458d83a029dc0001b97b2d103045cc17c2"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.695717 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705142 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" event={"ID":"b75d4622-ac12-4f82-afc9-ab63e6278b0c","Type":"ContainerStarted","Data":"f9589a25d07ced54d4fbfa68774c413e985214ddc531362c9f8430ade544bfcc"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705171 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerStarted","Data":"11285df327738337914cc0ae565734b64d8fbdbaed5cdcd21d8f84db43967978"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705196 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerDied","Data":"6bba51891e1777a8a2c079cba18156b56f50c10e22f9de1c059b65799e3a81f6"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.697695 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705227 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" 
event={"ID":"15a97fe2-5022-4997-9936-4247ae7ecb43","Type":"ContainerStarted","Data":"f060cf0da8cda14bc2c113a9d1ffe07541127085f23390156d2bcdad8537aa45"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705260 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"2a8d489983f6bd76b9f322763b6391f38fc2342999f533803f70c94c9fb9e891"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705285 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"556adc9fcaa1c4f729bd2c62ba03266f487249b8813b55699d8f5f124825641f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705304 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.697812 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702291 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705311 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kbfbq" event={"ID":"bf1cc230-0a79-4a1d-b500-a65d02e50973","Type":"ContainerStarted","Data":"8a23814e5648f40975e3bf4990cc1d8a9b9e996452a93cd95f8834fb95ae4fd9"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702471 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 
13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705456 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerDied","Data":"60014c22022db848874d3a05474beca08d37dd24a5fad732534f373108a2dd40"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705487 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"615539dc-56e1-4489-9aee-33b3e769d4fc","Type":"ContainerDied","Data":"2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705508 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce4033218ef99b5e9450dbf20ed2bc82ac943f7fb09ce23f0d76d88d185685f" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705524 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerStarted","Data":"56938ffab16990f3cffb8faf949e1cb22709029d512d01af84649257d7bf62fc"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705531 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705610 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705628 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705549 27835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerDied","Data":"c9f1921c446214d30702dfb6939c3c003e6da6eb3a26e4b0d63f3a857db0e4ce"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702506 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702539 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702571 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705772 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" event={"ID":"394061b4-1bac-4699-96d2-88558c1adaf8","Type":"ContainerStarted","Data":"3e755bfdf969ae0aedc8dea1041ea98192494df2fdc6f217c2fff168055bbf86"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705857 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705870 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" event={"ID":"f3be6654-f969-4952-976d-218c86af7d2d","Type":"ContainerStarted","Data":"cfad49cbff250b58b653ccae069695f63c9bab515760a0757841107af6244cda"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705896 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7" 
event={"ID":"f3be6654-f969-4952-976d-218c86af7d2d","Type":"ContainerStarted","Data":"85d2f2197e1e2ff4c1589210cd39f7a91df442afde156e6d9ca6ea0a582e9f7e"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702603 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705957 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.702636 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.705912 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerStarted","Data":"a279c0abb201f34c96a283114d3949bb3fe1eddd5b4315ac341720f9a904daea"} Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704703 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 13:24:17.706091 master-0 kubenswrapper[27835]: I0318 13:24:17.704800 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.704800 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706311 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerDied","Data":"b43b7d12d5938ada2c8a891881e47265567c35b517ea58afd154109c58f9fc86"} Mar 18 
13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.704848 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706344 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerDied","Data":"430ac96fd015a6eea0a650279b116d5a8e02003f3361085b042396c185be38af"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.705096 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.705441 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706439 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706619 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bxlrz" event={"ID":"e390416b-4fa1-41d5-bc74-9e779b252350","Type":"ContainerStarted","Data":"9f1629a9c890b158ad74d9b6c35c2de2573e526e00eff6015bd3861ec48b5231"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706720 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" event={"ID":"8ffe2e75-9cc3-4244-95c8-800463c5aa28","Type":"ContainerStarted","Data":"6c8e2099733ce74e4ed7853f255ed961973595eaee20fcbedbc997cee28f6bf1"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706745 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" event={"ID":"8ffe2e75-9cc3-4244-95c8-800463c5aa28","Type":"ContainerStarted","Data":"77922c67e22a90e02f2bc6f9c2c3361d1f9624d65d1b4a186c450f61aa3c27f3"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706801 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerStarted","Data":"cb973050d91145843fda6519effa669a5d62a92181e514441bd6c04fe69dc004"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706817 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerDied","Data":"e3030c6144549ecf6368b1e14f59622a57b27f9cd532ce32634fa6a2d9e59421"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706829 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" event={"ID":"bf9d21f9-64d6-4e21-a985-491197038568","Type":"ContainerStarted","Data":"d01d4e5c147c00b55f6b6b29aae7404f1c65f4d9542857788934f8f1acdf5475"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706839 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerStarted","Data":"a173260494fc7cb4b3e5f060c679f7a75fbec9929d0f639c7f0f786a29fccfb7"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706874 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerDied","Data":"48e43ee75779b8e1045feaede050da1592482395d03ca73890f0546a58a0cc80"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706889 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" event={"ID":"3ee0f85b-219b-47cb-a22a-67d359a69881","Type":"ContainerStarted","Data":"99a3ea12b4f55e1c479ad9ada5ad2452af1ac0e39904d45fd6656f0a1828ea6f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706899 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7bfhd" event={"ID":"80daec9e-b15b-4782-a1f7-ce398bbe323b","Type":"ContainerDied","Data":"2307d9f9b6edb7075e27303dc674c0604795c0e793d990a0bd35a8d4c7882a78"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706961 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7bfhd" event={"ID":"80daec9e-b15b-4782-a1f7-ce398bbe323b","Type":"ContainerDied","Data":"b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706976 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b75c64ccdcd6c7429e0eee9f5d3eef7c095db4da53200590b14c4e34b0c5741d"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.706987 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerDied","Data":"6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707048 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerStarted","Data":"b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707064 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerStarted","Data":"54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707075 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707088 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707123 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707136 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707147 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707157 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707206 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"930c71fe1f23ece50da1d42638c299a59f7406da6f3b38ce884bbbd9a8e9fd63"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707219 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"73e962786148e331d71bca99dadc5db6b5fccf1a19effac4baa8614e839409fb"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707231 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerDied","Data":"8b95eb8fc69ceaf4692d8f4970690d7b9c31eb8fb64b767afa33cfaa9ea6e088"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707242 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"eefde7657bd37afb4c9f371b147972b63886e6cf2d4cda43e0d1e78de918e266"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707253 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" event={"ID":"16f8e725-f18a-478e-88c5-87d54aeb4857","Type":"ContainerStarted","Data":"66854fab27d048679dff3730825d0acfff884899a282ccd890ab724bab9d3de2"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707290 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"ff6705ae022d0bf84f4c53bcb269bd6cef0bdaa6cd2d1b607917b732069608ca"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707302 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"be09797185331aebbcbf41f53d2dbc11c634e6ebb97e729dc7217ba21143b152"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707314 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-92s8c" event={"ID":"029b127e-0faf-4957-b591-9c561b053cda","Type":"ContainerStarted","Data":"0bec5b0b6152a0f7f02d36d9ef96fae029938eb181145d603f9ae776f9e6ecbd"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707325 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerStarted","Data":"ba376a1d73a67617b715fd0231574193b06155f8209c21a3c5307d41e5c8af24"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707337 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerDied","Data":"7256607264aec34fc303524e25688f50a4035bdf4da670438e512c20c88759c7"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707349 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" event={"ID":"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b","Type":"ContainerStarted","Data":"217f2ddac8460682f53f483f75566ba056797e6cb9215803ff6c892d4d2a8575"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707422 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7vddk" event={"ID":"13c71f7d-1485-4f86-beb2-ee16cf420350","Type":"ContainerStarted","Data":"60319838cb4c436130cf522bd5ef49f412cc405649c46fe810d603e975d6844e"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707439 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7vddk" event={"ID":"13c71f7d-1485-4f86-beb2-ee16cf420350","Type":"ContainerStarted","Data":"9e3149a06c6f175072a4f298029a63d5886a08058f2cfbf229c65bf7015d1f34"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707450 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c6b7be01dc24d7f26b3d57447fbf2490a6f4dfb2fb1c9fdf65bee4f74420bdb3"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707464 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"4d4a6fb4b82b14518b178f0f032fb16b2ce281c0de8e87c9f0449d46d7739b5b"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707476 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"1eb12a87dc862d5b3d8d0f8d6df8c24ebffab83c33817eb9807a92d04594145f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707487 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c0f3b16d1ffaf44cfe1f64310fd36df2459a40e3dd2e9014c6e2cf3307b7b8c5"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707528 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"ed1f8fad77b3049231437f6fc06b7cd861bd826f2609faf9830e6b26f51e0a3b"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707565 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"aae54c8930e87459876624e3a195385d1057c9142b7bea1bae8fab9500f4916d"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707576 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"ff229113000bcb5174eae222c2757e6f95658656feb5832275cddd0f205c413f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707586 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" event={"ID":"d325c523-8e6f-4665-9f54-334eaf301141","Type":"ContainerStarted","Data":"daaff2e16f5e705f64dc5a7b025fa31e1b94f1cba87483d97066f316342671c2"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707612 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"2c1eaab5376b76077cdd4ce6b7a0fb23bc1c0baefb99ecfa31f11681b75f8136"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707624 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"6edbcdc30c81dd06208679a3331d6c44ead81bfa5ca710d7268a4a8e1bd10597"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707634 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" event={"ID":"59bf5114-29f9-4f70-8582-108e95327cb2","Type":"ContainerStarted","Data":"e48d984bde067fff459bf66d3627856479bf9e2fe952a4228b45cfe581507bda"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"6a4244395f4d75895479c6bde3bd69b3e184f114ebdcc985559b2e60abc18c9f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707680 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerDied","Data":"8282b58a87a9816b39b8e46af1e553cfafda7bc3ace1196ac63b527830a8a86a"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707696 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" event={"ID":"a8eff549-02f3-446e-b3a1-a66cecdc02a6","Type":"ContainerStarted","Data":"087f1bfbdd93b7cd9cc4547bdaad7f0c837b1009fb9f9eea1b3ef9f1330544d2"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707707 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707718 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707771 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707783 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707819 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"12c899e36dc6ffd83c34c2d6e92c233e31c0860e033db20595d2d07c037dd6e7"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707831 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerDied","Data":"44724c38cb2d6b59ba2396d53ded36b1d7f457c6dd6834e92f2a09e247880a38"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707857 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"5217b77d-b517-45c3-b76d-eee86d72b141","Type":"ContainerDied","Data":"a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707868 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1317dc133fc5be7f7286b377875f1533cdac704e7aa1b12b6a9de99eec130ad"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707877 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"a272363aabc94bf515887116c3094b118b2c3e6ac7802ab09d5f4466b9ec2a97"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707889 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707917 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"dd510699ed24732f88c9dd87f0f6af2740999700a9b342734a145bb0ab91ee55"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707928 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"15ab52d652113ef266940e33258fee75e250f493080fb37576944ab0faae3a29"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707938 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerDied","Data":"d3c2d483573799510afcab12d760b1183078a2dd2aa3d3d851d413db0b1d8ab1"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707976 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a4c87a8-6bf0-43b2-b598-1561cba3e391","Type":"ContainerDied","Data":"54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.707998 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b6e29331a441885b9941b0a8d3cb3f4a69221f2394a03c8cf38fa54c2e30f4"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708015 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerStarted","Data":"74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708038 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerStarted","Data":"71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708056 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"19465452bf90617b71d40fb46ab80696b86f027e8232a3f4b9f70c4975c500c6"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708069 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerDied","Data":"006cf29dad6df8430c273883ed49909e3a00377ac86065a75fe3162eb36811e5"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708080 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz" event={"ID":"deb67ea0-8342-40cb-b0f4-115270e878dd","Type":"ContainerStarted","Data":"86270375ddd9ef7091a168593f24db7b8afc117f301f953944886d249627818f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708092 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"256b1acfd961770152114ac2f96390408c67e8cdc51d71250cbe9043324535ff"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708103 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"46c5f01485b7d374a9f96f911d1e08a0851a6da27ef5610d41f394290374b7e5"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708276 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" event={"ID":"708812af-3249-4d57-8f28-055da22a7329","Type":"ContainerStarted","Data":"8f5a82461be0913418e367f26894d38008c42543db14c5256b1c342d3bda363f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708291 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"ff6612978fe2b0582a45870266115fa659f6abe171419afdc4fcd20dc786a7cb"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708350 27835 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708621 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"33f04bfdd9c035c5cd30a8348194efaf7b8c0c01d29ad4ecd3e45f3c84d558aa"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708664 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" event={"ID":"5b2acd84-85c0-4c47-90a4-44745b79976d","Type":"ContainerStarted","Data":"6d8e82ffbe075824d8315d20c4a3c5c63d1c4a778f543315fadbc9c6a49fcd1c"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708677 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerStarted","Data":"c363b4bee719d98f91140350d9af5c483f50d31b877a20b1c896b84c11923483"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708707 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerDied","Data":"36dcdc5868f986f835679461c4df710fd18e0dcfbcbbdc4c74c1460f2651a842"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708748 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" event={"ID":"7fb5bad7-07d9-45ac-ad27-a887d12d148f","Type":"ContainerStarted","Data":"70ca4cb931b7545d294f00c69b8bfe23595c69c1d94a66566a713806aa3eda58"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708774 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708790 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"aad28fcc9206746f0d26ad1538815d0d7f16ddcfe6c46b81f66fd625f49ae815"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708828 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"e491672fbffb4614bce8c9d210033686066eca2a19da5ff73650fa1e0579c900"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708846 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"2e03f5085662e6e504d1377448ea945910b21e845ebe1440b4adca9307187581"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708860 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerDied","Data":"e09c13a4c855b0e00ad1329ef737699f109774957ff6b437737fd8c1e39daca5"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708905 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-t4p42" event={"ID":"702076a9-b542-4768-9e9e-99b2cac0a66e","Type":"ContainerStarted","Data":"d085d8a019f7e2c66eb4ee6d163b9b2393cab47ea58008a519ad2cb921a6f6d3"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708926 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"f45388cd1975238bf0e6b465991fcc80231413d8a53415460458a08b790ffcab"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708939 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerDied","Data":"b5aaa571a68806249fc7d55159a4093df00ace03fbc9a12d84446e66a7f3e311"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.708952 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" event={"ID":"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc","Type":"ContainerStarted","Data":"3f26792b173013020f69888fc826973fdb52355d71160dda571060f1b858412f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709020 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"623b1f887d07a207253d786ce6f347b115eff72fb9da12be783a840d209812fb"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709037 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"c57e6b4204d657669c9164f93a42b5760026a8d1d5180433a4216ca3f552edf0"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709074 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" event={"ID":"74f296d4-40d1-449e-88ea-db6c1574a11a","Type":"ContainerStarted","Data":"b8e76ab6e36792c638116c40619921d7addf605312998f00e62d98e5a5614955"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709093 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerDied","Data":"254c4c55fc5a8cefc576158a3cd6566c4e22decb0988ded62e89b98504ee1458"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709107 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"cb385758-78ae-46b3-994e-fec9b14b7322","Type":"ContainerDied","Data":"44cadcc137f107d216cd01d7217282fd78fcbb3fd1c79dd935088ac2165b138b"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709123 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cadcc137f107d216cd01d7217282fd78fcbb3fd1c79dd935088ac2165b138b"
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709134 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a205b027fb3bb1fbef6f4f0b2f902a1dfc370d3685fa3edd769df89a510f9823"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709174 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"b889e0b36e4f8979e44b821a5f017daae136b65f503dea68d88d71644816b7aa"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709187 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"bb3c1483ffac3748926d161fadb4e79f4a598cb1de15bbbd5db0a2eb9306ca39"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709204 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"8f2f91bac220e62247e22b1d4ddac3f6faed23614b554c7d9cb87b50de91ff64"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709220 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"66077c2a26014879f2ee8a44731dd4750343ebe7a4a34fc0f126a55d48c25d7c"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709262 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8e4ba40307f1e3c32ed5043b13eaa8d528a5352038969de985182a9daf4f59ae"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709281 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"e6502b58667f09b48e77dc67a79186e19cc74b3537a34e37099ff0c5b4adbd6e"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709293 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerDied","Data":"f3d6a2875cca50d672dfde1a32c8dca9e65a425957da660e57609821797e598b"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709305 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" event={"ID":"19a76585-a9ac-4ed9-9146-bb77b31848c6","Type":"ContainerStarted","Data":"4a97b24b2b4402b956c009659df6a92e6079c267e12ae961ceccadc636caf34a"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709350 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerStarted","Data":"f3ae91480a2e8eb448094d4c03f841ffed318076eee9f40a63820ede2deb2573"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709365 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerDied","Data":"f405c7c5758aab122512ec8685660fb5ea0502d97836267e430ea463ff79f592"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709444 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" event={"ID":"34a3a84b-048f-4822-9f05-0e7509327ca2","Type":"ContainerStarted","Data":"e311ec640a1a240867598468c9fc4d6af26b5595345ee1d6fdaa4ef38454491f"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709468 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"34eb85215321835de9e05074243d042b395a27aa34e46a23e03dd0c3867bbe76"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709512 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"663b337d51e2873d5151b1f329e1358b3ddb8ded99570ad538a8ad35be083482"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709527 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" event={"ID":"fb65c095-ca20-432c-a069-ad6719fca9c8","Type":"ContainerStarted","Data":"381d29d4a5ad407e637362bbe1b13c2af8936f3cc15562644f115d2bb0e3ff71"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709538 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"7e3df25b205a0f81e8b3659d8d979ba18ce4e4a3839b35bafa1b5c2dfee3ce6c"}
Mar 18 13:24:17.709752 master-0 kubenswrapper[27835]: I0318 13:24:17.709623 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"4730d27ac1ee53c97b091eb46aa90f3ccfdd14d063c45f26304bbc54bbafa80e"}
Mar 18 13:24:17.714946 master-0 kubenswrapper[27835]: I0318 13:24:17.709650 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5blrl" event={"ID":"ce3c462e-b655-40bc-811a-95ccde49fdb8","Type":"ContainerStarted","Data":"9d840b1327f66205cf6b23b15b1f1425e68ae2cb9d5dd3a177c50ba638a9ce65"}
Mar 18 13:24:17.716008 master-0 kubenswrapper[27835]: I0318 13:24:17.709814 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.716008 master-0 kubenswrapper[27835]: I0318 13:24:17.715938 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:17.716008 master-0 kubenswrapper[27835]: I0318 13:24:17.715965 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716140 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " 
pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716199 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716235 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.710822 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerDied","Data":"4daffe612ceab094bb2d1f38476f0856eefbbaa467bc42a2d0b021a9807cf03f"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716317 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"b4d424a6-cf4e-4e32-bc50-db63ef03f8dd","Type":"ContainerDied","Data":"52a3b14cf6bdc42bb301c45eb61a63c8e96420bc048eb6405582d863a95b40ad"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716355 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a3b14cf6bdc42bb301c45eb61a63c8e96420bc048eb6405582d863a95b40ad" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716368 27835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"8a1d2a28a02adaf96d6f547aaeb69dc4f550840901bd9f9f9311a6733ad3c203"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716379 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerDied","Data":"48a75a1bd556b4ca5c903ca8cec01a63d2822cbb454ffb75470b5fa995517263"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716388 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"c8a9c7ba3dfa56fce014dd866938b3ebae10a392ba44b6a44344dd4757310fda"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716389 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716397 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" event={"ID":"68104a8c-3fac-4d4b-b975-bc2d045b3375","Type":"ContainerStarted","Data":"06b37ad3c0f2f564ede6e81cb5f87c31e9193aa64abb54a08ee07cad5168cccd"} Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716428 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716454 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.716939 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717219 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717240 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717257 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717272 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717290 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717307 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717324 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: 
\"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717344 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdkqm\" (UniqueName: \"kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717359 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717374 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.717470 master-0 kubenswrapper[27835]: I0318 13:24:17.717400 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717782 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-images\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716458 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-kcsgp" event={"ID":"0278b04b-b27b-4717-a009-a70315fd05a6","Type":"ContainerStarted","Data":"255878e502d4cefb42ba40055cada36ae5db45de3d4a7c393b1e4c8220dae784"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718026 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-kcsgp" event={"ID":"0278b04b-b27b-4717-a009-a70315fd05a6","Type":"ContainerStarted","Data":"caf8685ec1d7171c12646ad4a2c704d85c1985e24c1994b6f4a18dfa14666d6f"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718046 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"924646e44a1c5adbdb9870533fe34c79d2c53b932110e145fe7c6282f99e8cc8"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718055 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerDied","Data":"239bc63a547a5d1be7fb026224506bae5660c286e46adef016daf55c15815d54"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718067 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" 
event={"ID":"595f697b-d238-4500-84ce-1ea00377f05e","Type":"ContainerStarted","Data":"5ebf31a11d3c2bc39d6ab17ef9ad48b30eadf37d8f9dbe43bf78a5b88a48eeba"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718075 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"bf094747ec2d2ce13923f1981e2b60301e7fc87989743a2574d721a3108a0f23"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718084 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"e6c4257b0b67452b20cd7a6f86548bd2672d0f4565af7d50ef244e38ac13bf8e"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718114 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"a669294396e2c35e0be9a0842b4ba90e0c2258e89ac9948c8865f45b4432e16b"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718123 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" event={"ID":"6a93ff56-362e-44fc-a54f-666a01559892","Type":"ContainerStarted","Data":"cb59122d7a7b042121b64340b8ada26c1823fa00f9c980926b47cbaa0d20cc3f"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718132 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"ad240e2b8d0267297457b342613a05d440eb47d85b7ae176e7581cd53dc16f38"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718141 27835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"46201f52909688d0b866665564333312afa7f308fe0c5dd71538e6e4a883b683"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718150 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" event={"ID":"a9de7243-90c0-49c4-8059-34e0558fca40","Type":"ContainerStarted","Data":"c338b30f4ce7d3b65e0ff2e507deac121b209e8c01583658956897e30a06262e"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718160 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"92280bccd2b6d5e0f9862db35c3e4e8885627146385a886dbc5fe3415968b7dc"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718169 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerDied","Data":"55fa6d94ce214941faacc4a186e818424b11b71ba4c1eab406a044ddb774b931"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718178 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"b3c1a7233994d6fe76298cbc19c305628db4f9a91233624d87cce643360815bc"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718187 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" 
event={"ID":"f38b464d-a218-4753-b7ac-a7d373952c4d","Type":"ContainerStarted","Data":"354c2a6b66c065fe648ce36ee5e4c7bbfed1c688af2120800fda750d61548f3b"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718200 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hldc" event={"ID":"e54baea8-6c3e-45a0-ac8c-880a8aaa8208","Type":"ContainerStarted","Data":"381bb2c8f965d035883df3aed2837df6b027fe5a3fa9b570128156fcc37a3b8c"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718210 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6hldc" event={"ID":"e54baea8-6c3e-45a0-ac8c-880a8aaa8208","Type":"ContainerStarted","Data":"79c45dcce1d819c7fccd19f2123bf5227e7882d825c4cbdf8c140e544e9eccec"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718218 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"57458ff3e47bb71f51462cfaad03298ba4f4252a840f0e60177178013f47586d"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718227 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"55c94bf30a1ccca039ed50a5bce5510c09848033cc6f053a453f757341dfc8bc"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718237 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"a4c090aab4f3bf89ced608a71e5db3af3d21ed7b2100020f019a5440d122cecc"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718245 27835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerDied","Data":"0697c4988a8dce166398ff970c57c5e68178bc04fae2f2829aa0dffd05961950"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718253 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" event={"ID":"902909ca-ab08-49aa-9736-70e073f8e67d","Type":"ContainerStarted","Data":"8bdb6f1dfbc7856feb8d743ecbf26b76b58e1fc414ee00508489367f3860efae"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718274 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerDied","Data":"64aef303c60ed75302cdf53b54c1f5e7b01831e38260821ecee71573b2f8873b"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718286 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9b853631-ff77-4643-aa07-b1f8056320a3","Type":"ContainerDied","Data":"a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718295 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a971c7d2244a974c92fa9c0c47762387f2198f4f305068fbf35726add0f3183d" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718303 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"7a7f2ccfc78b34586b520e7b273c5529da5b88ce117fdf9009b75da391aff58c"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718312 27835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerDied","Data":"373a96948933142905a0929fac3fe9686db40a54b4edff77be09a9cdf58a235d"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718323 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"59c8efa660020136a15ef14448bc9cb0b22e7df7c1b1767ff473eca4a83bd7ff"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718332 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" event={"ID":"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617","Type":"ContainerStarted","Data":"1bf3d426d907a1cb94f7713355be45a70fb7cd061dca794ecb62191beca0b9d4"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718341 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" event={"ID":"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94","Type":"ContainerStarted","Data":"ad72c23a28f38e825c0456b52af920cafa1e150c8d395ab556d6b63b8187ab88"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718350 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" event={"ID":"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94","Type":"ContainerStarted","Data":"d5c45f47f10bb08721004bc944edd8b049be91900e107372ecc9bc0e512a2248"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718359 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" 
event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerDied","Data":"b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718371 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerDied","Data":"a272363aabc94bf515887116c3094b118b2c3e6ac7802ab09d5f4466b9ec2a97"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718382 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146"} Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716731 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718584 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716777 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716816 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716848 27835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718709 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718751 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718787 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718811 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716863 27835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.718836 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719023 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719054 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719077 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfb5c\" (UniqueName: \"kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719243 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719284 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719306 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719326 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719351 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719372 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719437 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719462 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719483 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719518 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719540 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719563 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719699 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-config\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.716890 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717150 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717151 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.719947 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-cache\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717160 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717169 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720070 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-config\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717176 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720172 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720264 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717184 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720284 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717226 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720354 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lvq\" (UniqueName: \"kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq\") pod \"csi-snapshot-controller-64854d9cff-qsnxz\" (UID: \"deb67ea0-8342-40cb-b0f4-115270e878dd\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720437 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720464 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717240 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720488 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.720513 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717310 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 13:24:17.720573 master-0 kubenswrapper[27835]: I0318 13:24:17.717352 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.717501 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.718833 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.720802 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.721086 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.721218 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.721244 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.725732 master-0 kubenswrapper[27835]: I0318 13:24:17.721332 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8eff549-02f3-446e-b3a1-a66cecdc02a6-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:24:17.746560 master-0 kubenswrapper[27835]: I0318 13:24:17.746497 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.746947 master-0 kubenswrapper[27835]: I0318 13:24:17.746901 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:24:17.746996 master-0 kubenswrapper[27835]: I0318 13:24:17.746958 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:24:17.746996 master-0 kubenswrapper[27835]: I0318 13:24:17.746960 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7f4ae93-428b-4ebd-bfaa-18359b407ede-metrics-tls\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:24:17.747093 master-0 kubenswrapper[27835]: I0318 13:24:17.747024 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:24:17.747144 master-0 kubenswrapper[27835]: I0318 13:24:17.747095 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.748025 master-0 kubenswrapper[27835]: I0318 13:24:17.747996 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:24:17.748092 master-0 kubenswrapper[27835]: I0318 13:24:17.748052 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:24:17.748092 master-0 kubenswrapper[27835]: I0318 13:24:17.748084 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.748173 master-0 kubenswrapper[27835]: I0318 13:24:17.748112 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.748323 master-0 kubenswrapper[27835]: I0318 13:24:17.748294 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.748373 master-0 kubenswrapper[27835]: I0318 13:24:17.748329 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:24:17.748373 master-0 kubenswrapper[27835]: I0318 13:24:17.748358 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:24:17.748477 master-0 kubenswrapper[27835]: I0318 13:24:17.748383 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:24:17.748477 master-0 kubenswrapper[27835]: I0318 13:24:17.748455 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:24:17.748573 master-0 kubenswrapper[27835]: I0318 13:24:17.748484 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:24:17.748573 master-0 kubenswrapper[27835]: I0318 13:24:17.748517 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnqw\" (UniqueName: \"kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw\") pod \"network-check-source-b4bf74f6-tw7c7\" (UID: \"f3be6654-f969-4952-976d-218c86af7d2d\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7"
Mar 18 13:24:17.748811 master-0 kubenswrapper[27835]: I0318 13:24:17.748780 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:24:17.748860 master-0 kubenswrapper[27835]: I0318 13:24:17.748820 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.748860 master-0 kubenswrapper[27835]: I0318 13:24:17.748849 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.748944 master-0 kubenswrapper[27835]: I0318 13:24:17.748882 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.748944 master-0 kubenswrapper[27835]: I0318 13:24:17.748912 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.749084 master-0 kubenswrapper[27835]: I0318 13:24:17.748941 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:24:17.749084 master-0 kubenswrapper[27835]: I0318 13:24:17.748993 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:17.749084 master-0 kubenswrapper[27835]: I0318 13:24:17.749026 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:17.749084 master-0 kubenswrapper[27835]: I0318 13:24:17.749075 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749107 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749132 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749160 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749212 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749241 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.749297 master-0 kubenswrapper[27835]: I0318 13:24:17.749257 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 13:24:17.749592 master-0 kubenswrapper[27835]: I0318 13:24:17.749568 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 13:24:17.750027 master-0 kubenswrapper[27835]: I0318 13:24:17.749992 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 13:24:17.750118 master-0 kubenswrapper[27835]: I0318 13:24:17.750084 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750137 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750168 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750198 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750225 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750254 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750283 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbsq9\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750312 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:24:17.750499 master-0 kubenswrapper[27835]: I0318 13:24:17.750343 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.750365 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753055 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753091 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753114 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753152 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753181 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753209 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753237 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753263 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753289 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753318 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753346 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:24:17.753403 master-0 kubenswrapper[27835]: I0318 13:24:17.753370 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.773632 master-0 kubenswrapper[27835]: I0318 13:24:17.751685 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 13:24:17.773632 master-0 kubenswrapper[27835]: I0318 13:24:17.751774 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 13:24:17.774027 master-0 kubenswrapper[27835]: I0318 13:24:17.773960 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-images\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:17.774886 master-0 kubenswrapper[27835]: I0318 13:24:17.774194 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:24:17.774959 master-0 kubenswrapper[27835]: I0318 13:24:17.774912 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.775005 master-0 kubenswrapper[27835]: I0318 13:24:17.774955 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:24:17.775005 master-0 kubenswrapper[27835]: I0318 13:24:17.774978 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8eff549-02f3-446e-b3a1-a66cecdc02a6-config\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd" Mar 18 13:24:17.775117 master-0 kubenswrapper[27835]: I0318 13:24:17.775049 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:24:17.775270 master-0 kubenswrapper[27835]: I0318 13:24:17.775251 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 13:24:17.775612 master-0 kubenswrapper[27835]: I0318 13:24:17.751844 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.775820 master-0 kubenswrapper[27835]: I0318 13:24:17.751831 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.781438 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.776928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.777649 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59bf5114-29f9-4f70-8582-108e95327cb2-metrics-tls\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.777917 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5a4202c2-c330-4a5d-87e7-0a63d069113f-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.778250 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a4202c2-c330-4a5d-87e7-0a63d069113f-proxy-tls\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.781513 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.778708 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3ff09ab-cbe1-49e7-8121-5f71997a5176-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.778513 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.779847 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/902909ca-ab08-49aa-9736-70e073f8e67d-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.780195 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.780577 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/902909ca-ab08-49aa-9736-70e073f8e67d-operand-assets\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: E0318 13:24:17.780808 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: E0318 13:24:17.780865 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:17.782100 
master-0 kubenswrapper[27835]: E0318 13:24:17.780906 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: E0318 13:24:17.780974 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.781061 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/15a97fe2-5022-4997-9936-4247ae7ecb43-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.779147 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/290d1f84-5c5c-4bff-b045-e6020793cded-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.781618 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.781625 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.777237 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea9eb53-0385-4a1a-a64f-696f8520cf49-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.751937 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.751984 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:24:17.782100 master-0 kubenswrapper[27835]: I0318 13:24:17.752151 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.770775 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.772002 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.772181 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.773554 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.776830 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777138 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.782512 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777161 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777202 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777245 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777294 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777331 27835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.782820 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-cert\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777452 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777558 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777616 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777658 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777705 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777744 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777798 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 
kubenswrapper[27835]: I0318 13:24:17.777849 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777908 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.777958 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.779357 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.780589 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.780846 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.780931 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.781014 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.781155 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:24:17.791237 master-0 kubenswrapper[27835]: I0318 13:24:17.787241 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-srv-cert\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:24:17.792232 master-0 kubenswrapper[27835]: I0318 13:24:17.792134 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-config\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.804483 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.804812 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe643e40-d06d-4e69-9be3-0065c2a78567-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.804932 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.805297 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.805423 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.806060 27835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9d21f9-64d6-4e21-a985-491197038568-serving-cert\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld" Mar 18 13:24:17.806098 master-0 kubenswrapper[27835]: I0318 13:24:17.806122 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.806621 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d09a56-ed4c-40b7-8be1-f3934c07296e-trusted-ca\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.782021 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807127 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807167 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807323 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807367 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807387 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807407 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4" Mar 18 13:24:17.813711 master-0 
kubenswrapper[27835]: I0318 13:24:17.807438 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807615 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807636 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807654 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv" Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807692 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.807713 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.810169 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.810214 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.810453 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-daemon-config\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.810620 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.808820 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.812241 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.808880 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.809014 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.812929 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.813711 master-0 kubenswrapper[27835]: I0318 13:24:17.813087 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cni-binary-copy\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.815312 master-0 kubenswrapper[27835]: I0318 13:24:17.815181 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 13:24:17.816990 master-0 kubenswrapper[27835]: I0318 13:24:17.815664 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9d09a56-ed4c-40b7-8be1-f3934c07296e-metrics-tls\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:24:17.816990 master-0 kubenswrapper[27835]: I0318 13:24:17.815951 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.816990 master-0 kubenswrapper[27835]: I0318 13:24:17.816501 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3ff09ab-cbe1-49e7-8121-5f71997a5176-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:24:17.816990 master-0 kubenswrapper[27835]: I0318 13:24:17.816695 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9d21f9-64d6-4e21-a985-491197038568-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:17.817166 master-0 kubenswrapper[27835]: I0318 13:24:17.817102 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 13:24:17.818231 master-0 kubenswrapper[27835]: I0318 13:24:17.818205 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:24:17.828899 master-0 kubenswrapper[27835]: I0318 13:24:17.828849 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/290d1f84-5c5c-4bff-b045-e6020793cded-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:17.829979 master-0 kubenswrapper[27835]: I0318 13:24:17.829841 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 13:24:17.839056 master-0 kubenswrapper[27835]: I0318 13:24:17.838975 27835 scope.go:117] "RemoveContainer" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"
Mar 18 13:24:17.843166 master-0 kubenswrapper[27835]: I0318 13:24:17.842192 27835 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 13:24:17.843374 master-0 kubenswrapper[27835]: E0318 13:24:17.843298 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1\": container with ID starting with caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1 not found: ID does not exist" containerID="caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"
Mar 18 13:24:17.843374 master-0 kubenswrapper[27835]: I0318 13:24:17.843330 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1"} err="failed to get container status \"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1\": rpc error: code = NotFound desc = could not find container \"caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1\": container with ID starting with caba03fd38cc380067c9996cdbf9e7480fdd072e4dbef7e8d359a205ae43b4e1 not found: ID does not exist"
Mar 18 13:24:17.849296 master-0 kubenswrapper[27835]: I0318 13:24:17.849232 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 13:24:17.868888 master-0 kubenswrapper[27835]: I0318 13:24:17.868854 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 13:24:17.889707 master-0 kubenswrapper[27835]: I0318 13:24:17.889645 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:24:17.911677 master-0 kubenswrapper[27835]: I0318 13:24:17.911624 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.911677 master-0 kubenswrapper[27835]: I0318 13:24:17.911675 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911702 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911729 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911749 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911769 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911788 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911805 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911821 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.911842 master-0 kubenswrapper[27835]: I0318 13:24:17.911840 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911859 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfsk\" (UniqueName: \"kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911878 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911899 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911915 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911933 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911951 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm77k\" (UniqueName: \"kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911970 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkdr\" (UniqueName: \"kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.911986 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912006 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912022 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912038 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912053 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912070 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912089 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912109 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912140 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912157 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912182 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.912200 master-0 kubenswrapper[27835]: I0318 13:24:17.912200 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.912502 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-utilities\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.912643 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-multus-certs\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.912742 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovn-node-metrics-cert\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.912844 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-client\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.912985 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19a76585-a9ac-4ed9-9146-bb77b31848c6-serving-cert\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.913043 master-0 kubenswrapper[27835]: I0318 13:24:17.913037 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913108 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913139 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913155 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913158 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913197 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913224 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913244 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913262 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:24:17.913294 master-0 kubenswrapper[27835]: I0318 13:24:17.913280 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkfkr\" (UniqueName: \"kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913298 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zplb4\" (UniqueName: \"kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913336 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913349 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bf1cc230-0a79-4a1d-b500-a65d02e50973-metrics-certs\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913354 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913381 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913405 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913457 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913488 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913516 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913541 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913560 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913586 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913607 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913631 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fw5f\" (UniqueName: \"kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913676 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:17.913694 master-0 kubenswrapper[27835]: I0318 13:24:17.913697 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkw55\" (UniqueName: \"kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913717 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913736 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913753 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c2c4a58-9780-4ecd-b417-e590ac3576ed-config\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913756 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913783 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913802 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913823 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jwh\" (UniqueName: \"kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913842 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913863 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913883 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8qc\" (UniqueName: \"kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913900 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913925 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913944 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913959 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913975 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.913990 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914008 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities\") pod \"certified-operators-8wqfk\" (UID:
\"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914042 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914062 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jxdg\" (UniqueName: \"kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914082 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmlh2\" (UniqueName: \"kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914100 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914129 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914157 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-multus\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914195 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-node-pullsecrets\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914216 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-kubelet\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914244 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914270 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.914313 master-0 kubenswrapper[27835]: I0318 13:24:17.914343 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914367 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914391 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914432 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 
13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914479 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914505 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914522 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914539 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914367 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-env-overrides\") pod \"ovnkube-node-kxqjc\" (UID: 
\"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914587 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914626 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914649 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhcj\" (UniqueName: \"kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914668 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-utilities\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914673 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914693 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914711 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914728 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914745 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8j5\" (UniqueName: \"kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 
13:24:17.914771 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8ff\" (UniqueName: \"kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914802 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914819 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914842 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914861 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" 
(UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914877 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914924 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914942 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914916 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b75d4622-ac12-4f82-afc9-ab63e6278b0c-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914959 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915000 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.914043 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915030 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915050 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915071 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915091 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915117 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgt55\" (UniqueName: \"kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915151 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915161 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/16f8e725-f18a-478e-88c5-87d54aeb4857-cache\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915171 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915215 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915236 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 
13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915253 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915274 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915288 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-utilities\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915293 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915367 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-os-release\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: 
I0318 13:24:17.915367 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28z2f\" (UniqueName: \"kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f\") pod \"migrator-8487694857-49h6x\" (UID: \"5b2acd84-85c0-4c47-90a4-44745b79976d\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915402 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915502 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-textfile\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.915779 master-0 kubenswrapper[27835]: I0318 13:24:17.915589 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-env-overrides\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.915999 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-catalog-content\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " 
pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.916169 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34a3a84b-048f-4822-9f05-0e7509327ca2-config\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917375 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917436 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917467 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917500 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917500 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-etc-kubernetes\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917523 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917553 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917556 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-cnibin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917584 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lv2\" (UniqueName: 
\"kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917606 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-netns\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917614 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917751 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917787 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/6a93ff56-362e-44fc-a54f-666a01559892-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917808 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f7f4ae93-428b-4ebd-bfaa-18359b407ede-host-etc-kube\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917847 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-env-overrides\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917898 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.917930 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.918014 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrgxg\" (UniqueName: \"kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 
13:24:17.918025 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce3c462e-b655-40bc-811a-95ccde49fdb8-mcd-auth-proxy-config\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.918033 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.918087 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:24:17.918225 master-0 kubenswrapper[27835]: I0318 13:24:17.918145 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918324 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " 
pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918365 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918372 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918391 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918429 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-cnibin\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918522 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 
13:24:17.918771 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918792 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/0e7156cf-2d68-4de8-b7e7-60e1539590dd-ovnkube-identity-cm\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918840 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918877 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.918946 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk4ql\" (UniqueName: \"kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: 
\"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.919166 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e390416b-4fa1-41d5-bc74-9e779b252350-utilities\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.919210 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.919274 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.919329 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:24:17.919439 master-0 kubenswrapper[27835]: I0318 13:24:17.919378 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919485 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919519 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919549 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919579 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvq2h\" (UniqueName: \"kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919581 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-tuned\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919601 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34a3a84b-048f-4822-9f05-0e7509327ca2-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919652 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9548e397-0db4-41c8-9cc8-b575060e9c66-catalog-content\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919739 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919836 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: 
\"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919863 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbwfq\" (UniqueName: \"kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919882 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919899 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.919939 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.920037 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.920069 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:24:17.920156 master-0 kubenswrapper[27835]: I0318 13:24:17.920098 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920163 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920221 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-conf-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920388 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920458 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920534 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hvsl\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920793 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-dir\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920917 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.920993 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921386 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921465 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921488 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921534 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w477x\" (UniqueName: \"kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x\") pod 
\"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921560 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921612 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6h6\" (UniqueName: \"kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921661 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82f9g\" (UniqueName: \"kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921720 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff8tm\" (UniqueName: \"kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921744 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921769 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921789 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921808 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921826 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921854 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921881 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921902 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921948 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921968 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" 
(UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.921991 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.922164 master-0 kubenswrapper[27835]: I0318 13:24:17.922021 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922207 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922256 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.924051 master-0 
kubenswrapper[27835]: I0318 13:24:17.922285 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922319 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922353 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922382 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922441 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: 
\"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922460 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19a76585-a9ac-4ed9-9146-bb77b31848c6-config\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922473 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922464 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/708812af-3249-4d57-8f28-055da22a7329-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922578 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922692 27835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit-dir\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922728 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922784 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922796 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2316774-4ebc-4fa9-be07-eb1f16f614dd-catalog-content\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922823 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922824 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922872 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922913 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-var-lib-cni-bin\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922926 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.922960 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" Mar 18 13:24:17.924051 
master-0 kubenswrapper[27835]: I0318 13:24:17.923133 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923167 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-host-run-k8s-cni-cncf-io\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923238 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75d4622-ac12-4f82-afc9-ab63e6278b0c-config\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923261 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-hostroot\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923465 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0a6090f0-3a27-4102-b8dd-b071644a3543-snapshots\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " 
pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923485 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923538 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923558 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923722 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-system-cni-dir\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.924051 master-0 kubenswrapper[27835]: I0318 13:24:17.923954 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: 
\"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924141 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924179 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924521 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a25632e-32d0-43d2-9be7-f515d29a1720-catalog-content\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924573 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924611 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924652 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924758 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924830 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.924976 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925225 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925228 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925279 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-system-cni-dir\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925319 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925347 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: 
\"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925374 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925403 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925449 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925480 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925507 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925538 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925566 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925594 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqln\" (UniqueName: \"kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925632 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 
13:24:17.925658 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925674 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d2455453-5943-49ef-bfea-cba077197da0-srv-cert\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925703 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.925721 master-0 kubenswrapper[27835]: I0318 13:24:17.925731 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925761 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlb8t\" (UniqueName: \"kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " 
pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925792 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925826 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925846 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925853 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925887 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925906 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925914 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767da57e-44e4-4861-bc6f-427c5bbb4d9d-os-release\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.925974 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926041 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926080 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926089 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926162 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bf4c5410-fb44-45e8-ab66-24806e6349b8-tmp\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926356 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce3728ab-5d50-40ac-95b3-74a5b62a557f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926356 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926360 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926442 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ce3728ab-5d50-40ac-95b3-74a5b62a557f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926456 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926525 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926585 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926618 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" 
(UniqueName: \"kubernetes.io/configmap/053cc9bc-f98e-46f6-93bb-b5344d20bf74-iptables-alerter-script\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926617 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926653 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926678 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926698 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926826 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926881 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926836 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595f697b-d238-4500-84ce-1ea00377f05e-config\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:17.926893 master-0 kubenswrapper[27835]: I0318 13:24:17.926925 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.926960 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxv5\" (UniqueName: 
\"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.926987 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.927028 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.927138 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c2c4a58-9780-4ecd-b417-e590ac3576ed-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.927485 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: 
I0318 13:24:17.927627 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.927640 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3ee0f85b-219b-47cb-a22a-67d359a69881-tmpfs\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.927652 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928005 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928006 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " 
pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928046 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928077 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfbx8\" (UniqueName: \"kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928105 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.928129 master-0 kubenswrapper[27835]: I0318 13:24:17.928112 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-multus-socket-dir-parent\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928157 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " 
pod="openshift-etcd/etcd-master-0" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928254 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928304 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928313 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/595f697b-d238-4500-84ce-1ea00377f05e-serving-cert\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928368 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-script-lib\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.928934 master-0 kubenswrapper[27835]: I0318 13:24:17.928579 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab2f96fb-ef55-4427-a598-7e3f1e224045-ovnkube-config\") pod 
\"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:17.929656 master-0 kubenswrapper[27835]: I0318 13:24:17.928979 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:24:17.929989 master-0 kubenswrapper[27835]: I0318 13:24:17.929944 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e7156cf-2d68-4de8-b7e7-60e1539590dd-webhook-cert\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78" Mar 18 13:24:17.949055 master-0 kubenswrapper[27835]: I0318 13:24:17.948980 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 13:24:17.949257 master-0 kubenswrapper[27835]: I0318 13:24:17.949221 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zrc8h_720a1f60-c1cb-4aef-aaec-f082090ca631/multus-admission-controller/0.log" Mar 18 13:24:17.949310 master-0 kubenswrapper[27835]: I0318 13:24:17.949287 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:24:17.972669 master-0 kubenswrapper[27835]: I0318 13:24:17.972618 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:24:17.989330 master-0 kubenswrapper[27835]: I0318 13:24:17.989273 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 13:24:18.008831 master-0 kubenswrapper[27835]: I0318 13:24:18.008781 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:24:18.020474 master-0 kubenswrapper[27835]: I0318 13:24:18.020433 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:18.020474 master-0 kubenswrapper[27835]: I0318 13:24:18.020466 27835 scope.go:117] "RemoveContainer" containerID="721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146" Mar 18 13:24:18.020613 master-0 kubenswrapper[27835]: I0318 13:24:18.020477 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:18.020656 master-0 kubenswrapper[27835]: I0318 13:24:18.020619 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:18.020702 master-0 kubenswrapper[27835]: I0318 13:24:18.020662 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:18.029566 master-0 kubenswrapper[27835]: I0318 13:24:18.029517 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:24:18.030215 master-0 kubenswrapper[27835]: I0318 
13:24:18.030161 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.030276 master-0 kubenswrapper[27835]: I0318 13:24:18.030229 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.030276 master-0 kubenswrapper[27835]: I0318 13:24:18.030237 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.030276 master-0 kubenswrapper[27835]: I0318 13:24:18.030265 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030487 master-0 kubenswrapper[27835]: I0318 13:24:18.030318 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-log-socket\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030487 master-0 kubenswrapper[27835]: I0318 13:24:18.030399 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:18.030487 master-0 kubenswrapper[27835]: I0318 13:24:18.030451 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030487 master-0 kubenswrapper[27835]: I0318 13:24:18.030476 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-root\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.030651 master-0 kubenswrapper[27835]: I0318 13:24:18.030515 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030651 master-0 kubenswrapper[27835]: I0318 13:24:18.030554 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.030651 master-0 
kubenswrapper[27835]: I0318 13:24:18.030592 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030651 master-0 kubenswrapper[27835]: I0318 13:24:18.030642 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-slash\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.030841 master-0 kubenswrapper[27835]: I0318 13:24:18.030661 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.030841 master-0 kubenswrapper[27835]: I0318 13:24:18.030733 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.030841 master-0 kubenswrapper[27835]: I0318 13:24:18.030790 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:18.030841 master-0 
kubenswrapper[27835]: I0318 13:24:18.030811 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-sys\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.030841 master-0 kubenswrapper[27835]: I0318 13:24:18.030816 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.030845 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-var-lib-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.030866 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce3c462e-b655-40bc-811a-95ccde49fdb8-rootfs\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.030895 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.030932 
27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.030950 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.031023 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-var-lib-kubelet\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031044 master-0 kubenswrapper[27835]: I0318 13:24:18.031031 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysconfig\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031063 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-run\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031084 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031139 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031175 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-systemd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031187 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031244 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-lib-modules\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031253 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031278 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031311 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-kubernetes\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031344 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-etc-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031400 master-0 kubenswrapper[27835]: I0318 13:24:18.031425 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031443 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031516 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-ovn\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031562 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031629 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-systemd\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031699 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031739 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031767 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031814 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-run-netns\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031817 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031836 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031848 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:18.031863 master-0 kubenswrapper[27835]: I0318 13:24:18.031866 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-sys\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.031901 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.031950 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.031966 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.031990 27835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032014 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032079 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032105 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-node-log\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032127 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-host\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032184 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032205 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032249 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032250 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032266 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 
13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032300 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-kubelet\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032303 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-run-openvswitch\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032341 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.032335 master-0 kubenswrapper[27835]: I0318 13:24:18.032330 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-bin\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032393 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032428 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-cni-netd\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032458 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ffe2e75-9cc3-4244-95c8-800463c5aa28-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032564 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032590 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032630 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032657 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/16f8e725-f18a-478e-88c5-87d54aeb4857-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032659 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032678 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/053cc9bc-f98e-46f6-93bb-b5344d20bf74-host-slash\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032695 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032780 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032826 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-wtmp\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032875 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032907 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032940 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032958 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" 
(UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-modprobe-d\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032996 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13c71f7d-1485-4f86-beb2-ee16cf420350-hosts-file\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.032994 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-systemd-units\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.033068 master-0 kubenswrapper[27835]: I0318 13:24:18.033021 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:18.033796 master-0 kubenswrapper[27835]: I0318 13:24:18.033095 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/bf4c5410-fb44-45e8-ab66-24806e6349b8-etc-sysctl-conf\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj" Mar 18 13:24:18.033796 master-0 kubenswrapper[27835]: I0318 13:24:18.033117 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.033796 master-0 kubenswrapper[27835]: I0318 13:24:18.033150 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:18.033796 master-0 kubenswrapper[27835]: I0318 13:24:18.033214 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ab2f96fb-ef55-4427-a598-7e3f1e224045-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:18.049534 master-0 kubenswrapper[27835]: I0318 13:24:18.049475 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:24:18.056185 master-0 kubenswrapper[27835]: I0318 13:24:18.056118 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b89fb313-d01a-4305-b123-e253b3382b85-signing-key\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:24:18.070469 master-0 kubenswrapper[27835]: I0318 13:24:18.070402 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:24:18.072666 master-0 kubenswrapper[27835]: I0318 13:24:18.072616 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/b89fb313-d01a-4305-b123-e253b3382b85-signing-cabundle\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2" Mar 18 13:24:18.083429 master-0 kubenswrapper[27835]: I0318 13:24:18.083097 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.083714 master-0 kubenswrapper[27835]: I0318 13:24:18.083664 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 13:24:18.083800 master-0 kubenswrapper[27835]: I0318 13:24:18.083736 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 13:24:18.084066 master-0 kubenswrapper[27835]: I0318 13:24:18.083965 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:18.084066 master-0 kubenswrapper[27835]: I0318 13:24:18.084009 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:18.084066 master-0 kubenswrapper[27835]: I0318 13:24:18.084046 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:18.084358 master-0 kubenswrapper[27835]: I0318 13:24:18.084331 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:18.088385 master-0 kubenswrapper[27835]: I0318 13:24:18.088233 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 13:24:18.090375 master-0 kubenswrapper[27835]: I0318 13:24:18.090281 
27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:24:18.105510 master-0 kubenswrapper[27835]: I0318 13:24:18.104561 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:18.109802 master-0 kubenswrapper[27835]: I0318 13:24:18.109629 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 13:24:18.115722 master-0 kubenswrapper[27835]: I0318 13:24:18.115662 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-audit\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.129083 master-0 kubenswrapper[27835]: I0318 13:24:18.129027 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:24:18.134307 master-0 kubenswrapper[27835]: I0318 13:24:18.133109 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-client\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.149272 master-0 kubenswrapper[27835]: I0318 13:24:18.149221 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:24:18.149963 master-0 kubenswrapper[27835]: I0318 13:24:18.149899 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-serving-cert\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " 
pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.170026 master-0 kubenswrapper[27835]: I0318 13:24:18.169962 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:24:18.175807 master-0 kubenswrapper[27835]: I0318 13:24:18.175764 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a2bdf5b0-8764-4b15-97c9-20af36634fd0-encryption-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.189715 master-0 kubenswrapper[27835]: I0318 13:24:18.189320 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:24:18.189715 master-0 kubenswrapper[27835]: I0318 13:24:18.189659 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-config\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.209774 master-0 kubenswrapper[27835]: I0318 13:24:18.209722 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:24:18.216476 master-0 kubenswrapper[27835]: I0318 13:24:18.216069 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-image-import-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.229373 master-0 kubenswrapper[27835]: I0318 13:24:18.229333 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 
13:24:18.237067 master-0 kubenswrapper[27835]: I0318 13:24:18.237027 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-etcd-serving-ca\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.248915 master-0 kubenswrapper[27835]: I0318 13:24:18.248846 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:24:18.289804 master-0 kubenswrapper[27835]: I0318 13:24:18.289751 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:24:18.290015 master-0 kubenswrapper[27835]: I0318 13:24:18.289759 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:24:18.297194 master-0 kubenswrapper[27835]: I0318 13:24:18.297135 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 13:24:18.297587 master-0 kubenswrapper[27835]: I0318 13:24:18.297559 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2bdf5b0-8764-4b15-97c9-20af36634fd0-trusted-ca-bundle\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:18.309333 master-0 kubenswrapper[27835]: I0318 13:24:18.309284 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:24:18.317532 master-0 kubenswrapper[27835]: I0318 13:24:18.317480 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/16f8e725-f18a-478e-88c5-87d54aeb4857-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.335802 master-0 kubenswrapper[27835]: I0318 13:24:18.334939 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:24:18.349150 master-0 kubenswrapper[27835]: I0318 13:24:18.348917 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:24:18.369430 master-0 kubenswrapper[27835]: I0318 13:24:18.369363 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:24:18.372972 master-0 kubenswrapper[27835]: I0318 13:24:18.372927 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:18.389051 master-0 kubenswrapper[27835]: I0318 13:24:18.388886 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:24:18.397189 master-0 kubenswrapper[27835]: I0318 13:24:18.397131 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-audit-policies\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.409778 master-0 kubenswrapper[27835]: I0318 13:24:18.409746 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:24:18.428747 master-0 kubenswrapper[27835]: I0318 13:24:18.428696 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:24:18.429263 master-0 kubenswrapper[27835]: I0318 13:24:18.429239 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-serving-ca\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.448727 master-0 kubenswrapper[27835]: I0318 13:24:18.448675 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:24:18.452430 master-0 kubenswrapper[27835]: I0318 13:24:18.452386 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fb5bad7-07d9-45ac-ad27-a887d12d148f-trusted-ca-bundle\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.468466 master-0 kubenswrapper[27835]: I0318 13:24:18.468400 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:24:18.471763 master-0 kubenswrapper[27835]: I0318 13:24:18.471712 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-encryption-config\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.489774 master-0 kubenswrapper[27835]: I0318 13:24:18.489633 27835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 13:24:18.509312 master-0 kubenswrapper[27835]: I0318 13:24:18.509260 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:24:18.512755 master-0 kubenswrapper[27835]: I0318 13:24:18.512719 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-etcd-client\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.530941 master-0 kubenswrapper[27835]: I0318 13:24:18.530872 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:24:18.536987 master-0 kubenswrapper[27835]: I0318 13:24:18.536944 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fb5bad7-07d9-45ac-ad27-a887d12d148f-serving-cert\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:18.548995 master-0 kubenswrapper[27835]: I0318 13:24:18.548961 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:24:18.554482 master-0 kubenswrapper[27835]: I0318 13:24:18.554460 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr" Mar 18 13:24:18.569385 master-0 
kubenswrapper[27835]: I0318 13:24:18.569330 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 13:24:18.578559 master-0 kubenswrapper[27835]: I0318 13:24:18.578517 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/708812af-3249-4d57-8f28-055da22a7329-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt" Mar 18 13:24:18.588511 master-0 kubenswrapper[27835]: I0318 13:24:18.588473 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:24:18.609740 master-0 kubenswrapper[27835]: I0318 13:24:18.609623 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:24:18.613025 master-0 kubenswrapper[27835]: I0318 13:24:18.612988 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-webhook-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:18.617626 master-0 kubenswrapper[27835]: I0318 13:24:18.617593 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3ee0f85b-219b-47cb-a22a-67d359a69881-apiservice-cert\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:18.629636 master-0 kubenswrapper[27835]: I0318 13:24:18.629557 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqvgp"
Mar 18 13:24:18.648577 master-0 kubenswrapper[27835]: I0318 13:24:18.648489 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 13:24:18.658890 master-0 kubenswrapper[27835]: I0318 13:24:18.658841 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ffe2e75-9cc3-4244-95c8-800463c5aa28-serving-cert\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw"
Mar 18 13:24:18.668997 master-0 kubenswrapper[27835]: I0318 13:24:18.668947 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-q8tt6"
Mar 18 13:24:18.687480 master-0 kubenswrapper[27835]: I0318 13:24:18.687406 27835 request.go:700] Waited for 1.007472128s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Mar 18 13:24:18.688591 master-0 kubenswrapper[27835]: I0318 13:24:18.688544 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 13:24:18.698698 master-0 kubenswrapper[27835]: I0318 13:24:18.698653 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ffe2e75-9cc3-4244-95c8-800463c5aa28-service-ca\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw"
Mar 18 13:24:18.709376 master-0 kubenswrapper[27835]: I0318 13:24:18.709321 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 13:24:18.713580 master-0 kubenswrapper[27835]: I0318 13:24:18.713549 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb65c095-ca20-432c-a069-ad6719fca9c8-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk"
Mar 18 13:24:18.713580 master-0 kubenswrapper[27835]: I0318 13:24:18.713574 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/702076a9-b542-4768-9e9e-99b2cac0a66e-metrics-client-ca\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:18.713738 master-0 kubenswrapper[27835]: I0318 13:24:18.713597 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:24:18.714042 master-0 kubenswrapper[27835]: I0318 13:24:18.714008 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d325c523-8e6f-4665-9f54-334eaf301141-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:18.729729 master-0 kubenswrapper[27835]: I0318 13:24:18.729673 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 13:24:18.730064 master-0 kubenswrapper[27835]: I0318 13:24:18.729809 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-zrc8h_720a1f60-c1cb-4aef-aaec-f082090ca631/multus-admission-controller/0.log"
Mar 18 13:24:18.730662 master-0 kubenswrapper[27835]: I0318 13:24:18.730484 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"
Mar 18 13:24:18.731459 master-0 kubenswrapper[27835]: I0318 13:24:18.731395 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/5.log"
Mar 18 13:24:18.734183 master-0 kubenswrapper[27835]: I0318 13:24:18.734153 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 18 13:24:18.736177 master-0 kubenswrapper[27835]: I0318 13:24:18.736127 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:24:18.743970 master-0 kubenswrapper[27835]: I0318 13:24:18.743923 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 13:24:18.751516 master-0 kubenswrapper[27835]: I0318 13:24:18.751467 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lz5d6"
Mar 18 13:24:18.769053 master-0 kubenswrapper[27835]: I0318 13:24:18.768986 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-dprq6"
Mar 18 13:24:18.789345 master-0 kubenswrapper[27835]: I0318 13:24:18.789309 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 13:24:18.793380 master-0 kubenswrapper[27835]: I0318 13:24:18.793340 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6090f0-3a27-4102-b8dd-b071644a3543-serving-cert\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl"
Mar 18 13:24:18.805981 master-0 kubenswrapper[27835]: E0318 13:24:18.805914 27835 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.813796 master-0 kubenswrapper[27835]: I0318 13:24:18.813749 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 13:24:18.815633 master-0 kubenswrapper[27835]: I0318 13:24:18.815589 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl"
Mar 18 13:24:18.829774 master-0 kubenswrapper[27835]: I0318 13:24:18.829695 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 13:24:18.839598 master-0 kubenswrapper[27835]: I0318 13:24:18.839544 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6090f0-3a27-4102-b8dd-b071644a3543-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl"
Mar 18 13:24:18.848970 master-0 kubenswrapper[27835]: I0318 13:24:18.848919 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 13:24:18.854038 master-0 kubenswrapper[27835]: I0318 13:24:18.853991 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") "
Mar 18 13:24:18.854119 master-0 kubenswrapper[27835]: I0318 13:24:18.854046 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:24:18.854119 master-0 kubenswrapper[27835]: I0318 13:24:18.854066 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") "
Mar 18 13:24:18.854119 master-0 kubenswrapper[27835]: I0318 13:24:18.854091 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock" (OuterVolumeSpecName: "var-lock") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:24:18.855387 master-0 kubenswrapper[27835]: I0318 13:24:18.855349 27835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:24:18.855387 master-0 kubenswrapper[27835]: I0318 13:24:18.855379 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9b853631-ff77-4643-aa07-b1f8056320a3-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:24:18.869072 master-0 kubenswrapper[27835]: I0318 13:24:18.868978 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 13:24:18.888929 master-0 kubenswrapper[27835]: I0318 13:24:18.888864 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 13:24:18.899461 master-0 kubenswrapper[27835]: I0318 13:24:18.899431 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-node-bootstrap-token\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4"
Mar 18 13:24:18.909427 master-0 kubenswrapper[27835]: I0318 13:24:18.909377 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 13:24:18.912717 master-0 kubenswrapper[27835]: E0318 13:24:18.912678 27835 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.912797 master-0 kubenswrapper[27835]: E0318 13:24:18.912747 27835 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.912797 master-0 kubenswrapper[27835]: E0318 13:24:18.912767 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert podName:e54baea8-6c3e-45a0-ac8c-880a8aaa8208 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.412750069 +0000 UTC m=+23.377961629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert") pod "ingress-canary-6hldc" (UID: "e54baea8-6c3e-45a0-ac8c-880a8aaa8208") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.912797 master-0 kubenswrapper[27835]: E0318 13:24:18.912685 27835 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.912942 master-0 kubenswrapper[27835]: E0318 13:24:18.912810 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config podName:80994f33-21e7-45d6-9f21-1cfd8e1f41ce nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41279673 +0000 UTC m=+23.378008300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-8lzkl" (UID: "80994f33-21e7-45d6-9f21-1cfd8e1f41ce") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.912942 master-0 kubenswrapper[27835]: E0318 13:24:18.912876 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config podName:f38b464d-a218-4753-b7ac-a7d373952c4d nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.412851752 +0000 UTC m=+23.378063332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config") pod "machine-approver-5c6485487f-fk8ql" (UID: "f38b464d-a218-4753-b7ac-a7d373952c4d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.913822 master-0 kubenswrapper[27835]: E0318 13:24:18.913789 27835 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.913873 master-0 kubenswrapper[27835]: E0318 13:24:18.913824 27835 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-5fm1li8uoic3j: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.913873 master-0 kubenswrapper[27835]: E0318 13:24:18.913844 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config podName:d325c523-8e6f-4665-9f54-334eaf301141 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.413831919 +0000 UTC m=+23.379043569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-s4ql7" (UID: "d325c523-8e6f-4665-9f54-334eaf301141") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.913873 master-0 kubenswrapper[27835]: E0318 13:24:18.913847 27835 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.914004 master-0 kubenswrapper[27835]: E0318 13:24:18.913876 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle podName:41cc6278-8f99-407c-ba5f-750a40e3058c nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41386195 +0000 UTC m=+23.379073620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle") pod "metrics-server-65dbcd767c-7bqc9" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.914004 master-0 kubenswrapper[27835]: E0318 13:24:18.913901 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config podName:fb65c095-ca20-432c-a069-ad6719fca9c8 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41388832 +0000 UTC m=+23.379099890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-7tcjk" (UID: "fb65c095-ca20-432c-a069-ad6719fca9c8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915058 master-0 kubenswrapper[27835]: E0318 13:24:18.915023 27835 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915067 27835 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915089 27835 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915097 27835 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915108 27835 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915130 27835 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915094 27835 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915160 master-0 kubenswrapper[27835]: E0318 13:24:18.915143 27835 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915145 27835 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915070 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images podName:80994f33-21e7-45d6-9f21-1cfd8e1f41ce nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415060504 +0000 UTC m=+23.380272064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images") pod "cluster-cloud-controller-manager-operator-7dff898856-8lzkl" (UID: "80994f33-21e7-45d6-9f21-1cfd8e1f41ce") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915232 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca podName:a350f317-f058-4102-af5c-cbba46d35e02 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415219578 +0000 UTC m=+23.380431228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca") pod "route-controller-manager-c8888769b-8mxp6" (UID: "a350f317-f058-4102-af5c-cbba46d35e02") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915248 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls podName:d325c523-8e6f-4665-9f54-334eaf301141 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415240249 +0000 UTC m=+23.380451819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-s4ql7" (UID: "d325c523-8e6f-4665-9f54-334eaf301141") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915264 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs podName:830ff1d6-332e-46b1-b13c-c2507fdc3c19 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415255509 +0000 UTC m=+23.380467079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs") pod "multus-admission-controller-58c9f8fc64-lk5k7" (UID: "830ff1d6-332e-46b1-b13c-c2507fdc3c19") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915279 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls podName:80994f33-21e7-45d6-9f21-1cfd8e1f41ce nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41527237 +0000 UTC m=+23.380483940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-8lzkl" (UID: "80994f33-21e7-45d6-9f21-1cfd8e1f41ce") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915297 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles podName:c6c35e08-cdbc-4a86-a64a-3e5c34e941d7 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41529002 +0000 UTC m=+23.380501600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles") pod "controller-manager-d7c95db55-d6lqm" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915312 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle podName:00375107-9a3b-4161-a90d-72ea8827c5fc nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41530538 +0000 UTC m=+23.380516950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle") pod "router-default-7dcf5569b5-gvmtv" (UID: "00375107-9a3b-4161-a90d-72ea8827c5fc") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915330 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls podName:68104a8c-3fac-4d4b-b975-bc2d045b3375 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415322271 +0000 UTC m=+23.380533941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-9bqxm" (UID: "68104a8c-3fac-4d4b-b975-bc2d045b3375") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.915539 master-0 kubenswrapper[27835]: E0318 13:24:18.915343 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume podName:029b127e-0faf-4957-b591-9c561b053cda nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.415337011 +0000 UTC m=+23.380548581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume") pod "dns-default-92s8c" (UID: "029b127e-0faf-4957-b591-9c561b053cda") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.916327 master-0 kubenswrapper[27835]: E0318 13:24:18.916287 27835 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.916603 master-0 kubenswrapper[27835]: E0318 13:24:18.916301 27835 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.916686 master-0 kubenswrapper[27835]: E0318 13:24:18.916570 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls podName:ce3c462e-b655-40bc-811a-95ccde49fdb8 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.416540475 +0000 UTC m=+23.381752075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls") pod "machine-config-daemon-5blrl" (UID: "ce3c462e-b655-40bc-811a-95ccde49fdb8") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.916686 master-0 kubenswrapper[27835]: E0318 13:24:18.916647 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca podName:a9de7243-90c0-49c4-8059-34e0558fca40 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.416630197 +0000 UTC m=+23.381841767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-59d95" (UID: "a9de7243-90c0-49c4-8059-34e0558fca40") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.916818 master-0 kubenswrapper[27835]: I0318 13:24:18.916804 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd8aa7c1-0a04-4df0-9047-63ab846b9535-certs\") pod \"machine-config-server-wxht4\" (UID: \"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4"
Mar 18 13:24:18.917973 master-0 kubenswrapper[27835]: E0318 13:24:18.917937 27835 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.918054 master-0 kubenswrapper[27835]: E0318 13:24:18.917981 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls podName:41cc6278-8f99-407c-ba5f-750a40e3058c nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.417971385 +0000 UTC m=+23.383182945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls") pod "metrics-server-65dbcd767c-7bqc9" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.918054 master-0 kubenswrapper[27835]: E0318 13:24:18.918020 27835 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.918054 master-0 kubenswrapper[27835]: E0318 13:24:18.918033 27835 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.918180 master-0 kubenswrapper[27835]: E0318 13:24:18.918072 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls podName:6a93ff56-362e-44fc-a54f-666a01559892 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.418062657 +0000 UTC m=+23.383274218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-mxcng" (UID: "6a93ff56-362e-44fc-a54f-666a01559892") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.918180 master-0 kubenswrapper[27835]: E0318 13:24:18.918094 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images podName:68104a8c-3fac-4d4b-b975-bc2d045b3375 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.418080008 +0000 UTC m=+23.383291578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images") pod "machine-api-operator-6fbb6cf6f9-9bqxm" (UID: "68104a8c-3fac-4d4b-b975-bc2d045b3375") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.918332 master-0 kubenswrapper[27835]: E0318 13:24:18.918310 27835 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.918485 master-0 kubenswrapper[27835]: E0318 13:24:18.918465 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates podName:6db2bfbd-d8db-4384-8979-23e8a1e87e5e nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.418449878 +0000 UTC m=+23.383661448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates") pod "prometheus-operator-admission-webhook-69c6b55594-wsmsc" (UID: "6db2bfbd-d8db-4384-8979-23e8a1e87e5e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919236 master-0 kubenswrapper[27835]: E0318 13:24:18.919205 27835 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919327 master-0 kubenswrapper[27835]: E0318 13:24:18.919267 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert podName:a350f317-f058-4102-af5c-cbba46d35e02 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41925672 +0000 UTC m=+23.384468280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert") pod "route-controller-manager-c8888769b-8mxp6" (UID: "a350f317-f058-4102-af5c-cbba46d35e02") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919461 master-0 kubenswrapper[27835]: E0318 13:24:18.919438 27835 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919630 master-0 kubenswrapper[27835]: E0318 13:24:18.919612 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert podName:c6c35e08-cdbc-4a86-a64a-3e5c34e941d7 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41959512 +0000 UTC m=+23.384806760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert") pod "controller-manager-d7c95db55-d6lqm" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919746 master-0 kubenswrapper[27835]: E0318 13:24:18.919642 27835 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919902 master-0 kubenswrapper[27835]: E0318 13:24:18.919648 27835 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919971 master-0 kubenswrapper[27835]: E0318 13:24:18.919895 27835 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.919971 master-0 kubenswrapper[27835]: E0318 13:24:18.919879 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs podName:720a1f60-c1cb-4aef-aaec-f082090ca631 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.419866757 +0000 UTC m=+23.385078327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-zrc8h" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919971 master-0 kubenswrapper[27835]: E0318 13:24:18.919947 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls podName:702076a9-b542-4768-9e9e-99b2cac0a66e nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.419933119 +0000 UTC m=+23.385144769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls") pod "node-exporter-t4p42" (UID: "702076a9-b542-4768-9e9e-99b2cac0a66e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.919971 master-0 kubenswrapper[27835]: E0318 13:24:18.919967 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config podName:68104a8c-3fac-4d4b-b975-bc2d045b3375 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.41995873 +0000 UTC m=+23.385170420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config") pod "machine-api-operator-6fbb6cf6f9-9bqxm" (UID: "68104a8c-3fac-4d4b-b975-bc2d045b3375") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.920147 master-0 kubenswrapper[27835]: E0318 13:24:18.920005 27835 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.920147 master-0 kubenswrapper[27835]: E0318 13:24:18.920041 27835 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.920147 master-0 kubenswrapper[27835]: E0318 13:24:18.920046 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config podName:f38b464d-a218-4753-b7ac-a7d373952c4d nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.420037753 +0000 UTC m=+23.385249413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config") pod "machine-approver-5c6485487f-fk8ql" (UID: "f38b464d-a218-4753-b7ac-a7d373952c4d") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.920147 master-0 kubenswrapper[27835]: E0318 13:24:18.920106 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs podName:00375107-9a3b-4161-a90d-72ea8827c5fc nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.420094595 +0000 UTC m=+23.385306165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs") pod "router-default-7dcf5569b5-gvmtv" (UID: "00375107-9a3b-4161-a90d-72ea8827c5fc") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.920363 master-0 kubenswrapper[27835]: E0318 13:24:18.920327 27835 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.920505 master-0 kubenswrapper[27835]: E0318 13:24:18.920382 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config podName:2b12af9a-8041-477f-90eb-05bb6ae7861a nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.420368712 +0000 UTC m=+23.385580362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-lqtbg" (UID: "2b12af9a-8041-477f-90eb-05bb6ae7861a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 13:24:18.922973 master-0 kubenswrapper[27835]: E0318 13:24:18.922942 27835 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:18.923130 master-0 kubenswrapper[27835]: E0318 13:24:18.923114 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config podName:702076a9-b542-4768-9e9e-99b2cac0a66e nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.423096418 +0000 UTC m=+23.388307988 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config") pod "node-exporter-t4p42" (UID: "702076a9-b542-4768-9e9e-99b2cac0a66e") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.923239 master-0 kubenswrapper[27835]: E0318 13:24:18.923120 27835 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.923362 master-0 kubenswrapper[27835]: E0318 13:24:18.923347 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls podName:74f296d4-40d1-449e-88ea-db6c1574a11a nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.423334875 +0000 UTC m=+23.388546445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-sqx7p" (UID: "74f296d4-40d1-449e-88ea-db6c1574a11a") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.923519 master-0 kubenswrapper[27835]: E0318 13:24:18.923404 27835 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.923677 master-0 kubenswrapper[27835]: E0318 13:24:18.923658 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config podName:c6c35e08-cdbc-4a86-a64a-3e5c34e941d7 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.423641963 +0000 UTC m=+23.388853613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config") pod "controller-manager-d7c95db55-d6lqm" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.923819 master-0 kubenswrapper[27835]: E0318 13:24:18.923405 27835 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.923958 master-0 kubenswrapper[27835]: E0318 13:24:18.923462 27835 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.924016 master-0 kubenswrapper[27835]: E0318 13:24:18.923507 27835 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924081 master-0 kubenswrapper[27835]: E0318 13:24:18.923721 27835 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924132 master-0 kubenswrapper[27835]: E0318 13:24:18.923728 27835 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924132 master-0 kubenswrapper[27835]: E0318 13:24:18.923928 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle podName:41cc6278-8f99-407c-ba5f-750a40e3058c nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.423915531 +0000 UTC m=+23.389127101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle") pod "metrics-server-65dbcd767c-7bqc9" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924228 master-0 kubenswrapper[27835]: E0318 13:24:18.924150 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate podName:00375107-9a3b-4161-a90d-72ea8827c5fc nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.424127857 +0000 UTC m=+23.389339517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate") pod "router-default-7dcf5569b5-gvmtv" (UID: "00375107-9a3b-4161-a90d-72ea8827c5fc") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.924228 master-0 kubenswrapper[27835]: E0318 13:24:18.924170 27835 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.924228 master-0 kubenswrapper[27835]: E0318 13:24:18.924176 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap podName:6a93ff56-362e-44fc-a54f-666a01559892 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.424161938 +0000 UTC m=+23.389373508 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-mxcng" (UID: "6a93ff56-362e-44fc-a54f-666a01559892") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924366 master-0 kubenswrapper[27835]: E0318 13:24:18.924255 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles podName:41cc6278-8f99-407c-ba5f-750a40e3058c nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.42423665 +0000 UTC m=+23.389448310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles") pod "metrics-server-65dbcd767c-7bqc9" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924366 master-0 kubenswrapper[27835]: E0318 13:24:18.924288 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config podName:a350f317-f058-4102-af5c-cbba46d35e02 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.424269271 +0000 UTC m=+23.389480951 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config") pod "route-controller-manager-c8888769b-8mxp6" (UID: "a350f317-f058-4102-af5c-cbba46d35e02") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.924366 master-0 kubenswrapper[27835]: E0318 13:24:18.924307 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert podName:2b12af9a-8041-477f-90eb-05bb6ae7861a nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.424297951 +0000 UTC m=+23.389509631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert") pod "cluster-autoscaler-operator-866dc4744-lqtbg" (UID: "2b12af9a-8041-477f-90eb-05bb6ae7861a") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.925216 master-0 kubenswrapper[27835]: E0318 13:24:18.925138 27835 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.925311 master-0 kubenswrapper[27835]: E0318 13:24:18.925226 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls podName:f38b464d-a218-4753-b7ac-a7d373952c4d nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.425205706 +0000 UTC m=+23.390417356 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls") pod "machine-approver-5c6485487f-fk8ql" (UID: "f38b464d-a218-4753-b7ac-a7d373952c4d") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926354 master-0 kubenswrapper[27835]: E0318 13:24:18.926309 27835 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926354 master-0 kubenswrapper[27835]: E0318 13:24:18.926334 27835 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926521 master-0 kubenswrapper[27835]: E0318 13:24:18.926366 27835 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926521 master-0 kubenswrapper[27835]: E0318 13:24:18.926383 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth podName:00375107-9a3b-4161-a90d-72ea8827c5fc nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.42637065 +0000 UTC m=+23.391582220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth") pod "router-default-7dcf5569b5-gvmtv" (UID: "00375107-9a3b-4161-a90d-72ea8827c5fc") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926521 master-0 kubenswrapper[27835]: E0318 13:24:18.926405 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs podName:41cc6278-8f99-407c-ba5f-750a40e3058c nodeName:}" failed. 
No retries permitted until 2026-03-18 13:24:19.42639316 +0000 UTC m=+23.391604740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs") pod "metrics-server-65dbcd767c-7bqc9" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926521 master-0 kubenswrapper[27835]: E0318 13:24:18.926495 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls podName:029b127e-0faf-4957-b591-9c561b053cda nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.426483633 +0000 UTC m=+23.391695213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls") pod "dns-default-92s8c" (UID: "029b127e-0faf-4957-b591-9c561b053cda") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926766 master-0 kubenswrapper[27835]: E0318 13:24:18.926522 27835 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.926766 master-0 kubenswrapper[27835]: E0318 13:24:18.926577 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls podName:fb65c095-ca20-432c-a069-ad6719fca9c8 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.426564445 +0000 UTC m=+23.391776105 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-7tcjk" (UID: "fb65c095-ca20-432c-a069-ad6719fca9c8") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.927566 master-0 kubenswrapper[27835]: E0318 13:24:18.927527 27835 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.927566 master-0 kubenswrapper[27835]: E0318 13:24:18.927550 27835 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.927714 master-0 kubenswrapper[27835]: E0318 13:24:18.927579 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca podName:c6c35e08-cdbc-4a86-a64a-3e5c34e941d7 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.427565573 +0000 UTC m=+23.392777143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca") pod "controller-manager-d7c95db55-d6lqm" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:18.927714 master-0 kubenswrapper[27835]: E0318 13:24:18.927600 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert podName:a9de7243-90c0-49c4-8059-34e0558fca40 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.427587443 +0000 UTC m=+23.392799013 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-59d95" (UID: "a9de7243-90c0-49c4-8059-34e0558fca40") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.928761 master-0 kubenswrapper[27835]: E0318 13:24:18.928723 27835 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.928835 master-0 kubenswrapper[27835]: E0318 13:24:18.928785 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config podName:6a93ff56-362e-44fc-a54f-666a01559892 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:19.428773107 +0000 UTC m=+23.393984677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-mxcng" (UID: "6a93ff56-362e-44fc-a54f-666a01559892") : failed to sync secret cache: timed out waiting for the condition Mar 18 13:24:18.929652 master-0 kubenswrapper[27835]: I0318 13:24:18.929611 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:24:18.949192 master-0 kubenswrapper[27835]: I0318 13:24:18.949139 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-67jff" Mar 18 13:24:18.956474 master-0 kubenswrapper[27835]: I0318 13:24:18.956379 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") pod \"720a1f60-c1cb-4aef-aaec-f082090ca631\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " Mar 18 13:24:18.961101 master-0 kubenswrapper[27835]: I0318 13:24:18.961037 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "720a1f60-c1cb-4aef-aaec-f082090ca631" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:24:18.981897 master-0 kubenswrapper[27835]: I0318 13:24:18.981833 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:24:18.989593 master-0 kubenswrapper[27835]: I0318 13:24:18.989535 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:24:19.009570 master-0 kubenswrapper[27835]: I0318 13:24:19.009508 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r54p6" Mar 18 13:24:19.033670 master-0 kubenswrapper[27835]: I0318 13:24:19.033607 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hr2xw" Mar 18 13:24:19.048639 master-0 kubenswrapper[27835]: I0318 13:24:19.048590 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:24:19.060003 master-0 kubenswrapper[27835]: I0318 13:24:19.059953 27835 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/720a1f60-c1cb-4aef-aaec-f082090ca631-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:19.069438 master-0 kubenswrapper[27835]: I0318 13:24:19.069368 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:24:19.090019 master-0 kubenswrapper[27835]: I0318 13:24:19.089966 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:24:19.109994 master-0 kubenswrapper[27835]: I0318 13:24:19.109951 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:24:19.129576 master-0 kubenswrapper[27835]: I0318 13:24:19.129440 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:24:19.149192 master-0 kubenswrapper[27835]: I0318 13:24:19.149124 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-sn888" Mar 18 13:24:19.168876 master-0 kubenswrapper[27835]: I0318 13:24:19.168821 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-dz6jc" Mar 18 13:24:19.189469 master-0 kubenswrapper[27835]: I0318 13:24:19.189391 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:24:19.209491 master-0 kubenswrapper[27835]: I0318 13:24:19.209451 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:24:19.230073 master-0 kubenswrapper[27835]: I0318 13:24:19.230016 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5fm1li8uoic3j" Mar 18 13:24:19.249808 master-0 kubenswrapper[27835]: I0318 13:24:19.249754 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:24:19.269865 master-0 kubenswrapper[27835]: I0318 13:24:19.269804 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6fb5w" Mar 18 13:24:19.289507 master-0 kubenswrapper[27835]: I0318 13:24:19.289440 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:24:19.309504 master-0 kubenswrapper[27835]: I0318 13:24:19.309439 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:24:19.329229 master-0 kubenswrapper[27835]: I0318 13:24:19.329189 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:24:19.349792 master-0 kubenswrapper[27835]: I0318 13:24:19.349744 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx" Mar 18 13:24:19.368926 master-0 kubenswrapper[27835]: I0318 13:24:19.368873 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:24:19.388746 master-0 kubenswrapper[27835]: I0318 13:24:19.388586 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:24:19.408987 master-0 kubenswrapper[27835]: I0318 13:24:19.408931 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:24:19.428993 master-0 kubenswrapper[27835]: I0318 13:24:19.428914 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:24:19.448995 master-0 kubenswrapper[27835]: I0318 13:24:19.448936 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:24:19.468791 master-0 kubenswrapper[27835]: I0318 13:24:19.468721 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.468973 master-0 kubenswrapper[27835]: I0318 13:24:19.468858 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:19.468973 master-0 kubenswrapper[27835]: I0318 13:24:19.468916 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:19.469124 master-0 kubenswrapper[27835]: I0318 13:24:19.468970 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:24:19.469194 master-0 kubenswrapper[27835]: I0318 13:24:19.469118 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.469262 master-0 kubenswrapper[27835]: I0318 13:24:19.469195 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:19.469262 master-0 kubenswrapper[27835]: I0318 13:24:19.469255 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:19.469377 master-0 kubenswrapper[27835]: I0318 13:24:19.469285 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9de7243-90c0-49c4-8059-34e0558fca40-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:19.469377 master-0 kubenswrapper[27835]: I0318 13:24:19.469289 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc" Mar 18 13:24:19.469377 master-0 kubenswrapper[27835]: I0318 13:24:19.469339 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469482 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469489 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469529 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469553 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469576 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:19.469608 master-0 kubenswrapper[27835]: I0318 13:24:19.469604 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469657 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/fb65c095-ca20-432c-a069-ad6719fca9c8-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469666 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469725 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 
13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469803 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469844 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469915 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.469981 master-0 kubenswrapper[27835]: I0318 13:24:19.469959 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470038 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470120 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470130 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470159 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470176 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68104a8c-3fac-4d4b-b975-bc2d045b3375-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: 
I0318 13:24:19.470238 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:19.470365 master-0 kubenswrapper[27835]: I0318 13:24:19.470312 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470379 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470441 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470379 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ce3c462e-b655-40bc-811a-95ccde49fdb8-proxy-tls\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470502 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470538 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9de7243-90c0-49c4-8059-34e0558fca40-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470591 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470619 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " 
pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470653 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470724 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/6db2bfbd-d8db-4384-8979-23e8a1e87e5e-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-wsmsc\" (UID: \"6db2bfbd-d8db-4384-8979-23e8a1e87e5e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:24:19.470791 master-0 kubenswrapper[27835]: I0318 13:24:19.470813 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.470840 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.470858 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.470868 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.470884 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471056 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-metrics-certs\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471062 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 
13:24:19.471103 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471131 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471152 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471171 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471247 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471276 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471304 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471324 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471341 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471405 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:19.471520 master-0 kubenswrapper[27835]: I0318 13:24:19.471458 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.472700 master-0 kubenswrapper[27835]: I0318 13:24:19.471546 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.472700 master-0 kubenswrapper[27835]: I0318 13:24:19.471554 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/74f296d4-40d1-449e-88ea-db6c1574a11a-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" Mar 18 13:24:19.489506 master-0 kubenswrapper[27835]: I0318 13:24:19.489381 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:24:19.490555 
master-0 kubenswrapper[27835]: I0318 13:24:19.490490 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:19.508718 master-0 kubenswrapper[27835]: I0318 13:24:19.508653 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:24:19.529794 master-0 kubenswrapper[27835]: I0318 13:24:19.529721 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b" Mar 18 13:24:19.550236 master-0 kubenswrapper[27835]: I0318 13:24:19.550177 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:24:19.552123 master-0 kubenswrapper[27835]: I0318 13:24:19.552075 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f38b464d-a218-4753-b7ac-a7d373952c4d-machine-approver-tls\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.569690 master-0 kubenswrapper[27835]: I0318 13:24:19.569635 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:24:19.580816 master-0 kubenswrapper[27835]: I0318 13:24:19.580762 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-auth-proxy-config\") pod \"machine-approver-5c6485487f-fk8ql\" 
(UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.589530 master-0 kubenswrapper[27835]: I0318 13:24:19.589484 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:24:19.591841 master-0 kubenswrapper[27835]: I0318 13:24:19.591798 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38b464d-a218-4753-b7ac-a7d373952c4d-config\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: \"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:19.609736 master-0 kubenswrapper[27835]: I0318 13:24:19.609661 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:24:19.629534 master-0 kubenswrapper[27835]: I0318 13:24:19.629490 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 13:24:19.649457 master-0 kubenswrapper[27835]: I0318 13:24:19.649343 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 13:24:19.650593 master-0 kubenswrapper[27835]: I0318 13:24:19.650561 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00375107-9a3b-4161-a90d-72ea8827c5fc-service-ca-bundle\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:19.669373 master-0 kubenswrapper[27835]: I0318 13:24:19.669314 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 13:24:19.688995 master-0 kubenswrapper[27835]: I0318 
13:24:19.688933 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:24:19.692942 master-0 kubenswrapper[27835]: I0318 13:24:19.692879 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.710629 master-0 kubenswrapper[27835]: I0318 13:24:19.710484 27835 request.go:700] Waited for 2.018093985s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dmetrics-client-certs&limit=500&resourceVersion=0 Mar 18 13:24:19.712310 master-0 kubenswrapper[27835]: I0318 13:24:19.712221 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:24:19.722080 master-0 kubenswrapper[27835]: I0318 13:24:19.722031 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:19.729384 master-0 kubenswrapper[27835]: I0318 13:24:19.729324 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:24:19.731576 master-0 kubenswrapper[27835]: I0318 13:24:19.731535 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod 
\"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.742636 master-0 kubenswrapper[27835]: I0318 13:24:19.742581 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:19.749076 master-0 kubenswrapper[27835]: I0318 13:24:19.749012 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:24:19.749591 master-0 kubenswrapper[27835]: I0318 13:24:19.749526 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.769815 master-0 kubenswrapper[27835]: I0318 13:24:19.769750 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 13:24:19.776134 master-0 kubenswrapper[27835]: E0318 13:24:19.776088 27835 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj: failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:19.776286 master-0 kubenswrapper[27835]: E0318 13:24:19.776201 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs podName:98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:20.276178904 +0000 UTC m=+24.241390564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs") pod "operator-controller-controller-manager-57777556ff-9bjsj" (UID: "98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617") : failed to sync configmap cache: timed out waiting for the condition Mar 18 13:24:19.794216 master-0 kubenswrapper[27835]: I0318 13:24:19.794149 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:24:19.801019 master-0 kubenswrapper[27835]: I0318 13:24:19.800970 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:19.809324 master-0 kubenswrapper[27835]: I0318 13:24:19.809275 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:24:19.812693 master-0 kubenswrapper[27835]: I0318 13:24:19.812653 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029b127e-0faf-4957-b591-9c561b053cda-metrics-tls\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:19.829515 master-0 kubenswrapper[27835]: I0318 13:24:19.829462 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:24:19.849056 master-0 kubenswrapper[27835]: I0318 13:24:19.849004 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:24:19.850230 master-0 kubenswrapper[27835]: I0318 13:24:19.850188 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029b127e-0faf-4957-b591-9c561b053cda-config-volume\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c"
Mar 18 13:24:19.869350 master-0 kubenswrapper[27835]: I0318 13:24:19.869301 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 13:24:19.889286 master-0 kubenswrapper[27835]: I0318 13:24:19.889197 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 13:24:19.892379 master-0 kubenswrapper[27835]: I0318 13:24:19.892323 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-default-certificate\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:24:19.909264 master-0 kubenswrapper[27835]: I0318 13:24:19.909133 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 13:24:19.919530 master-0 kubenswrapper[27835]: I0318 13:24:19.919456 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/00375107-9a3b-4161-a90d-72ea8827c5fc-stats-auth\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv"
Mar 18 13:24:19.929826 master-0 kubenswrapper[27835]: I0318 13:24:19.929765 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hnp25"
Mar 18 13:24:19.949951 master-0 kubenswrapper[27835]: I0318 13:24:19.949898 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 13:24:19.951527 master-0 kubenswrapper[27835]: I0318 13:24:19.951470 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"
Mar 18 13:24:19.969698 master-0 kubenswrapper[27835]: I0318 13:24:19.969645 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 13:24:19.971033 master-0 kubenswrapper[27835]: I0318 13:24:19.971003 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:19.989398 master-0 kubenswrapper[27835]: I0318 13:24:19.989343 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 13:24:19.991182 master-0 kubenswrapper[27835]: I0318 13:24:19.991138 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-tls\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:20.018483 master-0 kubenswrapper[27835]: I0318 13:24:20.018428 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 13:24:20.029428 master-0 kubenswrapper[27835]: I0318 13:24:20.029364 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 13:24:20.049251 master-0 kubenswrapper[27835]: I0318 13:24:20.049181 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 13:24:20.049877 master-0 kubenswrapper[27835]: I0318 13:24:20.049824 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-cert\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:24:20.068899 master-0 kubenswrapper[27835]: I0318 13:24:20.068840 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:24:20.089010 master-0 kubenswrapper[27835]: I0318 13:24:20.088939 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 18 13:24:20.091669 master-0 kubenswrapper[27835]: I0318 13:24:20.091630 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:20.108946 master-0 kubenswrapper[27835]: I0318 13:24:20.108895 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l"
Mar 18 13:24:20.129759 master-0 kubenswrapper[27835]: I0318 13:24:20.129711 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5"
Mar 18 13:24:20.149338 master-0 kubenswrapper[27835]: I0318 13:24:20.149280 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 13:24:20.150753 master-0 kubenswrapper[27835]: I0318 13:24:20.150720 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:20.169358 master-0 kubenswrapper[27835]: I0318 13:24:20.169254 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-lvs7l"
Mar 18 13:24:20.190142 master-0 kubenswrapper[27835]: I0318 13:24:20.190098 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 13:24:20.201760 master-0 kubenswrapper[27835]: I0318 13:24:20.200639 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d325c523-8e6f-4665-9f54-334eaf301141-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7"
Mar 18 13:24:20.209845 master-0 kubenswrapper[27835]: I0318 13:24:20.209799 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vbmv6"
Mar 18 13:24:20.229049 master-0 kubenswrapper[27835]: I0318 13:24:20.228995 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 13:24:20.231915 master-0 kubenswrapper[27835]: I0318 13:24:20.231875 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b12af9a-8041-477f-90eb-05bb6ae7861a-cert\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"
Mar 18 13:24:20.248750 master-0 kubenswrapper[27835]: I0318 13:24:20.248712 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 13:24:20.251477 master-0 kubenswrapper[27835]: I0318 13:24:20.251442 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b12af9a-8041-477f-90eb-05bb6ae7861a-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg"
Mar 18 13:24:20.269327 master-0 kubenswrapper[27835]: I0318 13:24:20.269275 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 13:24:20.271245 master-0 kubenswrapper[27835]: I0318 13:24:20.271206 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-config\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"
Mar 18 13:24:20.285818 master-0 kubenswrapper[27835]: I0318 13:24:20.285770 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:20.286325 master-0 kubenswrapper[27835]: I0318 13:24:20.286280 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:20.289995 master-0 kubenswrapper[27835]: I0318 13:24:20.289974 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 13:24:20.291216 master-0 kubenswrapper[27835]: I0318 13:24:20.291197 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:24:20.309192 master-0 kubenswrapper[27835]: I0318 13:24:20.309127 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-sjstk"
Mar 18 13:24:20.329049 master-0 kubenswrapper[27835]: I0318 13:24:20.329004 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-6lm6r"
Mar 18 13:24:20.349780 master-0 kubenswrapper[27835]: I0318 13:24:20.349742 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 13:24:20.350908 master-0 kubenswrapper[27835]: I0318 13:24:20.350868 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"
Mar 18 13:24:20.369538 master-0 kubenswrapper[27835]: I0318 13:24:20.369499 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 13:24:20.370089 master-0 kubenswrapper[27835]: I0318 13:24:20.370014 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl"
Mar 18 13:24:20.388388 master-0 kubenswrapper[27835]: I0318 13:24:20.388341 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:24:20.411604 master-0 kubenswrapper[27835]: I0318 13:24:20.411568 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 13:24:20.412220 master-0 kubenswrapper[27835]: I0318 13:24:20.412189 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:20.429670 master-0 kubenswrapper[27835]: I0318 13:24:20.429587 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 13:24:20.430662 master-0 kubenswrapper[27835]: I0318 13:24:20.430639 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68104a8c-3fac-4d4b-b975-bc2d045b3375-images\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm"
Mar 18 13:24:20.448581 master-0 kubenswrapper[27835]: I0318 13:24:20.448526 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 13:24:20.452370 master-0 kubenswrapper[27835]: I0318 13:24:20.452343 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/6a93ff56-362e-44fc-a54f-666a01559892-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:24:20.469459 master-0 kubenswrapper[27835]: I0318 13:24:20.469394 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 13:24:20.470584 master-0 kubenswrapper[27835]: E0318 13:24:20.470556 27835 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:20.470753 master-0 kubenswrapper[27835]: E0318 13:24:20.470736 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs podName:830ff1d6-332e-46b1-b13c-c2507fdc3c19 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:21.470712382 +0000 UTC m=+25.435923942 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs") pod "multus-admission-controller-58c9f8fc64-lk5k7" (UID: "830ff1d6-332e-46b1-b13c-c2507fdc3c19") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:20.471392 master-0 kubenswrapper[27835]: E0318 13:24:20.471343 27835 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:20.471488 master-0 kubenswrapper[27835]: E0318 13:24:20.471401 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config podName:702076a9-b542-4768-9e9e-99b2cac0a66e nodeName:}" failed. No retries permitted until 2026-03-18 13:24:21.471383891 +0000 UTC m=+25.436595511 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config") pod "node-exporter-t4p42" (UID: "702076a9-b542-4768-9e9e-99b2cac0a66e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 13:24:20.489129 master-0 kubenswrapper[27835]: I0318 13:24:20.489111 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-5dnvq"
Mar 18 13:24:20.549446 master-0 kubenswrapper[27835]: I0318 13:24:20.549363 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 13:24:20.556336 master-0 kubenswrapper[27835]: I0318 13:24:20.556294 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 13:24:20.569472 master-0 kubenswrapper[27835]: I0318 13:24:20.569440 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-7kt87"
Mar 18 13:24:20.712280 master-0 kubenswrapper[27835]: I0318 13:24:20.712161 27835 request.go:700] Waited for 2.992398126s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token
Mar 18 13:24:20.758578 master-0 kubenswrapper[27835]: I0318 13:24:20.758511 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzr4\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-kube-api-access-wmzr4\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:24:20.758948 master-0 kubenswrapper[27835]: I0318 13:24:20.758917 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdkqm\" (UniqueName: \"kubernetes.io/projected/7fb5bad7-07d9-45ac-ad27-a887d12d148f-kube-api-access-sdkqm\") pod \"apiserver-5bb6f9f846-6wq9c\" (UID: \"7fb5bad7-07d9-45ac-ad27-a887d12d148f\") " pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c"
Mar 18 13:24:20.759184 master-0 kubenswrapper[27835]: I0318 13:24:20.759154 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhs5w\" (UniqueName: \"kubernetes.io/projected/7dca7577-6bee-4dd3-917a-7b7ccc42f0fc-kube-api-access-qhs5w\") pod \"openshift-controller-manager-operator-8c94f4649-cpqm5\" (UID: \"7dca7577-6bee-4dd3-917a-7b7ccc42f0fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-cpqm5"
Mar 18 13:24:20.762965 master-0 kubenswrapper[27835]: I0318 13:24:20.762903 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxzr\" (UniqueName: \"kubernetes.io/projected/767da57e-44e4-4861-bc6f-427c5bbb4d9d-kube-api-access-2nxzr\") pod \"multus-additional-cni-plugins-ttdn5\" (UID: \"767da57e-44e4-4861-bc6f-427c5bbb4d9d\") " pod="openshift-multus/multus-additional-cni-plugins-ttdn5"
Mar 18 13:24:20.826232 master-0 kubenswrapper[27835]: I0318 13:24:20.826135 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bpz\" (UniqueName: \"kubernetes.io/projected/394061b4-1bac-4699-96d2-88558c1adaf8-kube-api-access-r7bpz\") pod \"csi-snapshot-controller-operator-5f5d689c6b-68lgz\" (UID: \"394061b4-1bac-4699-96d2-88558c1adaf8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-68lgz"
Mar 18 13:24:20.827022 master-0 kubenswrapper[27835]: I0318 13:24:20.826969 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d09a56-ed4c-40b7-8be1-f3934c07296e-bound-sa-token\") pod \"ingress-operator-66b84d69b-wqxpk\" (UID: \"d9d09a56-ed4c-40b7-8be1-f3934c07296e\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk"
Mar 18 13:24:20.839828 master-0 kubenswrapper[27835]: I0318 13:24:20.839778 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb5b6\" (UniqueName: \"kubernetes.io/projected/5a4202c2-c330-4a5d-87e7-0a63d069113f-kube-api-access-kb5b6\") pod \"machine-config-operator-84d549f6d5-dlr6p\" (UID: \"5a4202c2-c330-4a5d-87e7-0a63d069113f\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-dlr6p"
Mar 18 13:24:20.839995 master-0 kubenswrapper[27835]: I0318 13:24:20.839787 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgffb\" (UniqueName: \"kubernetes.io/projected/bf9d21f9-64d6-4e21-a985-491197038568-kube-api-access-qgffb\") pod \"authentication-operator-5885bfd7f4-sp4ld\" (UID: \"bf9d21f9-64d6-4e21-a985-491197038568\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-sp4ld"
Mar 18 13:24:20.902747 master-0 kubenswrapper[27835]: I0318 13:24:20.902664 27835 scope.go:117] "RemoveContainer" containerID="a272363aabc94bf515887116c3094b118b2c3e6ac7802ab09d5f4466b9ec2a97"
Mar 18 13:24:21.460343 master-0 kubenswrapper[27835]: I0318 13:24:21.460290 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eff549-02f3-446e-b3a1-a66cecdc02a6-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-zm5rd\" (UID: \"a8eff549-02f3-446e-b3a1-a66cecdc02a6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-zm5rd"
Mar 18 13:24:21.464958 master-0 kubenswrapper[27835]: I0318 13:24:21.464897 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvdg2\" (UniqueName: \"kubernetes.io/projected/822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9-kube-api-access-qvdg2\") pod \"olm-operator-5c9796789-z8jkt\" (UID: \"822f4ac5-4f4b-4de3-9aad-e7fd5c0290e9\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt"
Mar 18 13:24:21.466889 master-0 kubenswrapper[27835]: I0318 13:24:21.466831 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfb5c\" (UniqueName: \"kubernetes.io/projected/a2bdf5b0-8764-4b15-97c9-20af36634fd0-kube-api-access-sfb5c\") pod \"apiserver-85b59d8688-wd26k\" (UID: \"a2bdf5b0-8764-4b15-97c9-20af36634fd0\") " pod="openshift-apiserver/apiserver-85b59d8688-wd26k"
Mar 18 13:24:21.480670 master-0 kubenswrapper[27835]: I0318 13:24:21.480601 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5xgh\" (UniqueName: \"kubernetes.io/projected/59bf5114-29f9-4f70-8582-108e95327cb2-kube-api-access-z5xgh\") pod \"dns-operator-9c5679d8f-5lzzn\" (UID: \"59bf5114-29f9-4f70-8582-108e95327cb2\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-5lzzn"
Mar 18 13:24:21.520548 master-0 kubenswrapper[27835]: I0318 13:24:21.520400 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fw5f\" (UniqueName: \"kubernetes.io/projected/bdcd72a6-a8e8-47ba-8b51-7325d35bad6b-kube-api-access-6fw5f\") pod \"control-plane-machine-set-operator-6f97756bc8-5vhnr\" (UID: \"bdcd72a6-a8e8-47ba-8b51-7325d35bad6b\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-5vhnr"
Mar 18 13:24:21.521792 master-0 kubenswrapper[27835]: I0318 13:24:21.521632 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8t5\" (UniqueName: \"kubernetes.io/projected/f7f4ae93-428b-4ebd-bfaa-18359b407ede-kube-api-access-mc8t5\") pod \"network-operator-7bd846bfc4-gxxbr\" (UID: \"f7f4ae93-428b-4ebd-bfaa-18359b407ede\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gxxbr"
Mar 18 13:24:21.526585 master-0 kubenswrapper[27835]: I0318 13:24:21.526526 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmlh2\" (UniqueName: \"kubernetes.io/projected/6a93ff56-362e-44fc-a54f-666a01559892-kube-api-access-wmlh2\") pod \"kube-state-metrics-7bbc969446-mxcng\" (UID: \"6a93ff56-362e-44fc-a54f-666a01559892\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-mxcng"
Mar 18 13:24:21.530685 master-0 kubenswrapper[27835]: I0318 13:24:21.528220 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dw4r\" (UniqueName: \"kubernetes.io/projected/2ea9eb53-0385-4a1a-a64f-696f8520cf49-kube-api-access-4dw4r\") pod \"package-server-manager-7b95f86987-p7vvx\" (UID: \"2ea9eb53-0385-4a1a-a64f-696f8520cf49\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx"
Mar 18 13:24:21.530685 master-0 kubenswrapper[27835]: I0318 13:24:21.528998 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lvq\" (UniqueName: \"kubernetes.io/projected/deb67ea0-8342-40cb-b0f4-115270e878dd-kube-api-access-62lvq\") pod \"csi-snapshot-controller-64854d9cff-qsnxz\" (UID: \"deb67ea0-8342-40cb-b0f4-115270e878dd\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qsnxz"
Mar 18 13:24:21.530685 master-0 kubenswrapper[27835]: I0318 13:24:21.529611 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbsq9\" (UniqueName: \"kubernetes.io/projected/98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617-kube-api-access-fbsq9\") pod \"operator-controller-controller-manager-57777556ff-9bjsj\" (UID: \"98e9b9f2-dd2b-4bb0-b2a8-5659a7f95617\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj"
Mar 18 13:24:21.530685 master-0 kubenswrapper[27835]: I0318 13:24:21.530118 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vtf\" (UniqueName: \"kubernetes.io/projected/15a97fe2-5022-4997-9936-4247ae7ecb43-kube-api-access-h4vtf\") pod \"cluster-storage-operator-7d87854d6-r4dzk\" (UID: \"15a97fe2-5022-4997-9936-4247ae7ecb43\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-r4dzk"
Mar 18 13:24:21.530685 master-0 kubenswrapper[27835]: I0318 13:24:21.530656 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zplb4\" (UniqueName: \"kubernetes.io/projected/13c71f7d-1485-4f86-beb2-ee16cf420350-kube-api-access-zplb4\") pod \"node-resolver-7vddk\" (UID: \"13c71f7d-1485-4f86-beb2-ee16cf420350\") " pod="openshift-dns/node-resolver-7vddk"
Mar 18 13:24:21.532706 master-0 kubenswrapper[27835]: I0318 13:24:21.532690 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jwh\" (UniqueName: \"kubernetes.io/projected/a9de7243-90c0-49c4-8059-34e0558fca40-kube-api-access-75jwh\") pod \"cloud-credential-operator-744f9dbf77-59d95\" (UID: \"a9de7243-90c0-49c4-8059-34e0558fca40\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-59d95"
Mar 18 13:24:21.533959 master-0 kubenswrapper[27835]: I0318 13:24:21.533937 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84cq\" (UniqueName: \"kubernetes.io/projected/0e7156cf-2d68-4de8-b7e7-60e1539590dd-kube-api-access-z84cq\") pod \"network-node-identity-x8r78\" (UID: \"0e7156cf-2d68-4de8-b7e7-60e1539590dd\") " pod="openshift-network-node-identity/network-node-identity-x8r78"
Mar 18 13:24:21.535237 master-0 kubenswrapper[27835]: I0318 13:24:21.535218 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"route-controller-manager-c8888769b-8mxp6\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"
Mar 18 13:24:21.536166 master-0 kubenswrapper[27835]: I0318 13:24:21.536150 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfsk\" (UniqueName: \"kubernetes.io/projected/2a25632e-32d0-43d2-9be7-f515d29a1720-kube-api-access-bcfsk\") pod \"community-operators-tqw5h\" (UID: \"2a25632e-32d0-43d2-9be7-f515d29a1720\") " pod="openshift-marketplace/community-operators-tqw5h"
Mar 18 13:24:21.536747 master-0 kubenswrapper[27835]: I0318 13:24:21.536731 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9zbp\" (UniqueName: \"kubernetes.io/projected/19a76585-a9ac-4ed9-9146-bb77b31848c6-kube-api-access-w9zbp\") pod \"etcd-operator-8544cbcf9c-jx4mf\" (UID: \"19a76585-a9ac-4ed9-9146-bb77b31848c6\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-jx4mf"
Mar 18 13:24:21.537333 master-0 kubenswrapper[27835]: I0318 13:24:21.537317 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6zmc\" (UniqueName: \"kubernetes.io/projected/0c2c4a58-9780-4ecd-b417-e590ac3576ed-kube-api-access-v6zmc\") pod \"openshift-apiserver-operator-d65958b8-4bqf9\" (UID: \"0c2c4a58-9780-4ecd-b417-e590ac3576ed\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-4bqf9"
Mar 18 13:24:21.542358 master-0 kubenswrapper[27835]: I0318 13:24:21.542337 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkx7\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-kube-api-access-rdkx7\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:21.543170 master-0 kubenswrapper[27835]: I0318 13:24:21.543156 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgt5t\" (UniqueName: \"kubernetes.io/projected/8c0e5eca-819b-40f3-bf77-0cd90a4f6e94-kube-api-access-lgt5t\") pod \"cluster-monitoring-operator-58845fbb57-n8hgl\" (UID: \"8c0e5eca-819b-40f3-bf77-0cd90a4f6e94\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-n8hgl"
Mar 18 13:24:21.544025 master-0 kubenswrapper[27835]: I0318 13:24:21.544009 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskqr\" (UniqueName: \"kubernetes.io/projected/902909ca-ab08-49aa-9736-70e073f8e67d-kube-api-access-kskqr\") pod \"cluster-olm-operator-67dcd4998-bppd4\" (UID: \"902909ca-ab08-49aa-9736-70e073f8e67d\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-bppd4"
Mar 18 13:24:21.544654 master-0 kubenswrapper[27835]: I0318 13:24:21.544638 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzqd\" (UniqueName: \"kubernetes.io/projected/ac6d8eb6-1d5e-4757-9823-5ffe478c711c-kube-api-access-zlzqd\") pod \"cluster-baremetal-operator-6f69995874-mz4qp\" (UID: \"ac6d8eb6-1d5e-4757-9823-5ffe478c711c\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mz4qp"
Mar 18 13:24:21.545471 master-0 kubenswrapper[27835]: I0318 13:24:21.545457 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnqw\" (UniqueName: \"kubernetes.io/projected/f3be6654-f969-4952-976d-218c86af7d2d-kube-api-access-9wnqw\") pod \"network-check-source-b4bf74f6-tw7c7\" (UID: \"f3be6654-f969-4952-976d-218c86af7d2d\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tw7c7"
Mar 18 13:24:21.546207 master-0 kubenswrapper[27835]: I0318 13:24:21.546191 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm77k\" (UniqueName: \"kubernetes.io/projected/b89fb313-d01a-4305-b123-e253b3382b85-kube-api-access-dm77k\") pod \"service-ca-79bc6b8d76-2zvf2\" (UID: \"b89fb313-d01a-4305-b123-e253b3382b85\") " pod="openshift-service-ca/service-ca-79bc6b8d76-2zvf2"
Mar 18 13:24:21.547183 master-0 kubenswrapper[27835]: I0318 13:24:21.547164 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4cqp\" (UniqueName: \"kubernetes.io/projected/fe643e40-d06d-4e69-9be3-0065c2a78567-kube-api-access-w4cqp\") pod \"marketplace-operator-89ccd998f-99pzm\" (UID: \"fe643e40-d06d-4e69-9be3-0065c2a78567\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:24:21.547840 master-0 kubenswrapper[27835]: I0318 13:24:21.547821 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/290d1f84-5c5c-4bff-b045-e6020793cded-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-6l7pv\" (UID: \"290d1f84-5c5c-4bff-b045-e6020793cded\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-6l7pv"
Mar 18 13:24:21.548594 master-0 kubenswrapper[27835]: I0318 13:24:21.548575 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsvmx\" (UniqueName: \"kubernetes.io/projected/b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10-kube-api-access-xsvmx\") pod \"multus-vkbvp\" (UID: \"b54f7c6d-b9ec-47c6-90a3-5a8d9bd15b10\") " pod="openshift-multus/multus-vkbvp"
Mar 18 13:24:21.550206 master-0 kubenswrapper[27835]: I0318 13:24:21.550127 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:24:21.550554 master-0 kubenswrapper[27835]: I0318 13:24:21.550515 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/830ff1d6-332e-46b1-b13c-c2507fdc3c19-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7"
Mar 18 13:24:21.550809 master-0 kubenswrapper[27835]: I0318 13:24:21.550783 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:21.551051 master-0 kubenswrapper[27835]: I0318 13:24:21.551013 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/702076a9-b542-4768-9e9e-99b2cac0a66e-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:21.551117 master-0 kubenswrapper[27835]: I0318 13:24:21.551067 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hxh\" (UniqueName: \"kubernetes.io/projected/c3ff09ab-cbe1-49e7-8121-5f71997a5176-kube-api-access-n2hxh\") pod \"cluster-node-tuning-operator-598fbc5f8f-kvbzn\" (UID: \"c3ff09ab-cbe1-49e7-8121-5f71997a5176\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-kvbzn"
Mar 18 13:24:21.552138 master-0 kubenswrapper[27835]: I0318 13:24:21.552106 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkdr\" (UniqueName: \"kubernetes.io/projected/bf4c5410-fb44-45e8-ab66-24806e6349b8-kube-api-access-hnkdr\") pod \"tuned-5ftdj\" (UID: \"bf4c5410-fb44-45e8-ab66-24806e6349b8\") " pod="openshift-cluster-node-tuning-operator/tuned-5ftdj"
Mar 18 13:24:21.552207 master-0 kubenswrapper[27835]: I0318 13:24:21.552144 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgzkd\" (UniqueName: \"kubernetes.io/projected/07505113-d5e7-4ea3-b9cc-8f08cba45ccc-kube-api-access-lgzkd\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb\" (UID: \"07505113-d5e7-4ea3-b9cc-8f08cba45ccc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-b6szb"
Mar 18 13:24:21.554798 master-0 kubenswrapper[27835]: I0318 13:24:21.554444 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkfkr\" (UniqueName: \"kubernetes.io/projected/702076a9-b542-4768-9e9e-99b2cac0a66e-kube-api-access-bkfkr\") pod \"node-exporter-t4p42\" (UID: \"702076a9-b542-4768-9e9e-99b2cac0a66e\") " pod="openshift-monitoring/node-exporter-t4p42"
Mar 18 13:24:21.575510 master-0 kubenswrapper[27835]: I0318 13:24:21.575466 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkw55\" (UniqueName: \"kubernetes.io/projected/e54baea8-6c3e-45a0-ac8c-880a8aaa8208-kube-api-access-kkw55\") pod \"ingress-canary-6hldc\" (UID: \"e54baea8-6c3e-45a0-ac8c-880a8aaa8208\") " pod="openshift-ingress-canary/ingress-canary-6hldc"
Mar 18 13:24:21.576258 master-0 kubenswrapper[27835]: I0318 13:24:21.576229 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28z2f\" (UniqueName: \"kubernetes.io/projected/5b2acd84-85c0-4c47-90a4-44745b79976d-kube-api-access-28z2f\") pod \"migrator-8487694857-49h6x\" (UID: \"5b2acd84-85c0-4c47-90a4-44745b79976d\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-49h6x"
Mar 18 13:24:21.577123 master-0 kubenswrapper[27835]: I0318 13:24:21.577092 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"metrics-server-65dbcd767c-7bqc9\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9"
Mar 18 13:24:21.577819 master-0 kubenswrapper[27835]: I0318 13:24:21.577790 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jxdg\" (UniqueName: \"kubernetes.io/projected/ce3c462e-b655-40bc-811a-95ccde49fdb8-kube-api-access-8jxdg\") pod \"machine-config-daemon-5blrl\" (UID: \"ce3c462e-b655-40bc-811a-95ccde49fdb8\") " pod="openshift-machine-config-operator/machine-config-daemon-5blrl"
Mar 18 13:24:21.581357 master-0 kubenswrapper[27835]: I0318 13:24:21.581306 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdtw\" (UniqueName: \"kubernetes.io/projected/bf1cc230-0a79-4a1d-b500-a65d02e50973-kube-api-access-dvdtw\") pod \"network-metrics-daemon-kbfbq\" (UID: \"bf1cc230-0a79-4a1d-b500-a65d02e50973\") " pod="openshift-multus/network-metrics-daemon-kbfbq"
Mar 18 13:24:21.582129 master-0 kubenswrapper[27835]: I0318 13:24:21.582102 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhcj\" (UniqueName: \"kubernetes.io/projected/708812af-3249-4d57-8f28-055da22a7329-kube-api-access-clhcj\") pod \"machine-config-controller-b4f87c5b9-9fdnt\" (UID: \"708812af-3249-4d57-8f28-055da22a7329\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fdnt"
Mar 18 13:24:21.586598 master-0 kubenswrapper[27835]: I0318 13:24:21.586296 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8qc\" (UniqueName: \"kubernetes.io/projected/2b12af9a-8041-477f-90eb-05bb6ae7861a-kube-api-access-sn8qc\") pod \"cluster-autoscaler-operator-866dc4744-lqtbg\" (UID: \"2b12af9a-8041-477f-90eb-05bb6ae7861a\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-lqtbg" Mar 
18 13:24:21.587792 master-0 kubenswrapper[27835]: I0318 13:24:21.587734 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ffe2e75-9cc3-4244-95c8-800463c5aa28-kube-api-access\") pod \"cluster-version-operator-7d58488df-bqmqw\" (UID: \"8ffe2e75-9cc3-4244-95c8-800463c5aa28\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bqmqw" Mar 18 13:24:21.587855 master-0 kubenswrapper[27835]: I0318 13:24:21.587834 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd8ff\" (UniqueName: \"kubernetes.io/projected/0a6090f0-3a27-4102-b8dd-b071644a3543-kube-api-access-bd8ff\") pod \"insights-operator-68bf6ff9d6-bbqfl\" (UID: \"0a6090f0-3a27-4102-b8dd-b071644a3543\") " pod="openshift-insights/insights-operator-68bf6ff9d6-bbqfl" Mar 18 13:24:21.588622 master-0 kubenswrapper[27835]: I0318 13:24:21.588598 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgt55\" (UniqueName: \"kubernetes.io/projected/029b127e-0faf-4957-b591-9c561b053cda-kube-api-access-wgt55\") pod \"dns-default-92s8c\" (UID: \"029b127e-0faf-4957-b591-9c561b053cda\") " pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:21.598114 master-0 kubenswrapper[27835]: I0318 13:24:21.598073 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8j5\" (UniqueName: \"kubernetes.io/projected/68104a8c-3fac-4d4b-b975-bc2d045b3375-kube-api-access-sx8j5\") pod \"machine-api-operator-6fbb6cf6f9-9bqxm\" (UID: \"68104a8c-3fac-4d4b-b975-bc2d045b3375\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9bqxm" Mar 18 13:24:21.608908 master-0 kubenswrapper[27835]: I0318 13:24:21.608390 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lv2\" (UniqueName: \"kubernetes.io/projected/fb65c095-ca20-432c-a069-ad6719fca9c8-kube-api-access-j5lv2\") pod 
\"prometheus-operator-6c8df6d4b-7tcjk\" (UID: \"fb65c095-ca20-432c-a069-ad6719fca9c8\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-7tcjk" Mar 18 13:24:21.628591 master-0 kubenswrapper[27835]: I0318 13:24:21.628520 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29qbv\" (UniqueName: \"kubernetes.io/projected/ce3728ab-5d50-40ac-95b3-74a5b62a557f-kube-api-access-29qbv\") pod \"openshift-config-operator-95bf4f4d-qwgrm\" (UID: \"ce3728ab-5d50-40ac-95b3-74a5b62a557f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:21.646630 master-0 kubenswrapper[27835]: I0318 13:24:21.646566 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrgxg\" (UniqueName: \"kubernetes.io/projected/d2316774-4ebc-4fa9-be07-eb1f16f614dd-kube-api-access-lrgxg\") pod \"certified-operators-8wqfk\" (UID: \"d2316774-4ebc-4fa9-be07-eb1f16f614dd\") " pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:21.665033 master-0 kubenswrapper[27835]: I0318 13:24:21.664970 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"multus-admission-controller-5dbbb8b86f-zrc8h\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" Mar 18 13:24:21.683664 master-0 kubenswrapper[27835]: I0318 13:24:21.683596 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fxgl\" (UniqueName: \"kubernetes.io/projected/595f697b-d238-4500-84ce-1ea00377f05e-kube-api-access-4fxgl\") pod \"service-ca-operator-b865698dc-hnr6m\" (UID: \"595f697b-d238-4500-84ce-1ea00377f05e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-hnr6m" Mar 18 13:24:21.708336 master-0 kubenswrapper[27835]: I0318 13:24:21.708253 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk4ql\" (UniqueName: \"kubernetes.io/projected/d325c523-8e6f-4665-9f54-334eaf301141-kube-api-access-mk4ql\") pod \"openshift-state-metrics-5dc6c74576-s4ql7\" (UID: \"d325c523-8e6f-4665-9f54-334eaf301141\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-s4ql7" Mar 18 13:24:21.720555 master-0 kubenswrapper[27835]: I0318 13:24:21.720493 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d4622-ac12-4f82-afc9-ab63e6278b0c-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-n7fn4\" (UID: \"b75d4622-ac12-4f82-afc9-ab63e6278b0c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-n7fn4" Mar 18 13:24:21.727972 master-0 kubenswrapper[27835]: I0318 13:24:21.727888 27835 request.go:700] Waited for 3.808165452s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token Mar 18 13:24:21.748846 master-0 kubenswrapper[27835]: I0318 13:24:21.748799 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvq2h\" (UniqueName: \"kubernetes.io/projected/830ff1d6-332e-46b1-b13c-c2507fdc3c19-kube-api-access-dvq2h\") pod \"multus-admission-controller-58c9f8fc64-lk5k7\" (UID: \"830ff1d6-332e-46b1-b13c-c2507fdc3c19\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-lk5k7" Mar 18 13:24:21.752848 master-0 kubenswrapper[27835]: I0318 13:24:21.752813 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") pod \"720a1f60-c1cb-4aef-aaec-f082090ca631\" (UID: \"720a1f60-c1cb-4aef-aaec-f082090ca631\") " Mar 18 13:24:21.756044 master-0 
kubenswrapper[27835]: I0318 13:24:21.756001 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh" (OuterVolumeSpecName: "kube-api-access-nbqfh") pod "720a1f60-c1cb-4aef-aaec-f082090ca631" (UID: "720a1f60-c1cb-4aef-aaec-f082090ca631"). InnerVolumeSpecName "kube-api-access-nbqfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:24:21.758364 master-0 kubenswrapper[27835]: I0318 13:24:21.758331 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/5.log" Mar 18 13:24:21.760274 master-0 kubenswrapper[27835]: I0318 13:24:21.760222 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbwfq\" (UniqueName: \"kubernetes.io/projected/9548e397-0db4-41c8-9cc8-b575060e9c66-kube-api-access-kbwfq\") pod \"redhat-operators-89st2\" (UID: \"9548e397-0db4-41c8-9cc8-b575060e9c66\") " pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:21.794153 master-0 kubenswrapper[27835]: I0318 13:24:21.794033 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34a3a84b-048f-4822-9f05-0e7509327ca2-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-k8tv4\" (UID: \"34a3a84b-048f-4822-9f05-0e7509327ca2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-k8tv4" Mar 18 13:24:21.805363 master-0 kubenswrapper[27835]: I0318 13:24:21.805317 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hvsl\" (UniqueName: \"kubernetes.io/projected/16f8e725-f18a-478e-88c5-87d54aeb4857-kube-api-access-8hvsl\") pod \"catalogd-controller-manager-6864dc98f7-q2ndb\" (UID: \"16f8e725-f18a-478e-88c5-87d54aeb4857\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:21.822812 master-0 kubenswrapper[27835]: I0318 13:24:21.822769 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxxg\" (UniqueName: \"kubernetes.io/projected/d42bcf13-548b-46c4-9a3d-a46f1b6ec045-kube-api-access-vkxxg\") pod \"ovnkube-control-plane-57f769d897-hvnt4\" (UID: \"d42bcf13-548b-46c4-9a3d-a46f1b6ec045\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-hvnt4" Mar 18 13:24:21.841482 master-0 kubenswrapper[27835]: I0318 13:24:21.841438 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod \"controller-manager-d7c95db55-d6lqm\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:21.856915 master-0 kubenswrapper[27835]: I0318 13:24:21.856850 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbqfh\" (UniqueName: \"kubernetes.io/projected/720a1f60-c1cb-4aef-aaec-f082090ca631-kube-api-access-nbqfh\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:21.880543 master-0 kubenswrapper[27835]: I0318 13:24:21.880500 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snjj\" (UniqueName: \"kubernetes.io/projected/0278b04b-b27b-4717-a009-a70315fd05a6-kube-api-access-2snjj\") pod \"network-check-target-kcsgp\" (UID: \"0278b04b-b27b-4717-a009-a70315fd05a6\") " pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:24:21.891432 master-0 kubenswrapper[27835]: I0318 13:24:21.889535 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w477x\" (UniqueName: \"kubernetes.io/projected/bd8aa7c1-0a04-4df0-9047-63ab846b9535-kube-api-access-w477x\") pod \"machine-config-server-wxht4\" (UID: 
\"bd8aa7c1-0a04-4df0-9047-63ab846b9535\") " pod="openshift-machine-config-operator/machine-config-server-wxht4" Mar 18 13:24:21.942341 master-0 kubenswrapper[27835]: I0318 13:24:21.939391 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz6h6\" (UniqueName: \"kubernetes.io/projected/e390416b-4fa1-41d5-bc74-9e779b252350-kube-api-access-cz6h6\") pod \"redhat-marketplace-bxlrz\" (UID: \"e390416b-4fa1-41d5-bc74-9e779b252350\") " pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:21.942341 master-0 kubenswrapper[27835]: I0318 13:24:21.940151 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff8tm\" (UniqueName: \"kubernetes.io/projected/74f296d4-40d1-449e-88ea-db6c1574a11a-kube-api-access-ff8tm\") pod \"cluster-samples-operator-85f7577d78-sqx7p\" (UID: \"74f296d4-40d1-449e-88ea-db6c1574a11a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-sqx7p" Mar 18 13:24:21.951650 master-0 kubenswrapper[27835]: I0318 13:24:21.951601 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82f9g\" (UniqueName: \"kubernetes.io/projected/3ee0f85b-219b-47cb-a22a-67d359a69881-kube-api-access-82f9g\") pod \"packageserver-ff75f747c-r46tm\" (UID: \"3ee0f85b-219b-47cb-a22a-67d359a69881\") " pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:21.961021 master-0 kubenswrapper[27835]: I0318 13:24:21.960982 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddh9\" (UniqueName: \"kubernetes.io/projected/ab2f96fb-ef55-4427-a598-7e3f1e224045-kube-api-access-mddh9\") pod \"ovnkube-node-kxqjc\" (UID: \"ab2f96fb-ef55-4427-a598-7e3f1e224045\") " pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:21.978675 master-0 kubenswrapper[27835]: I0318 13:24:21.978633 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gwqln\" (UniqueName: \"kubernetes.io/projected/80994f33-21e7-45d6-9f21-1cfd8e1f41ce-kube-api-access-gwqln\") pod \"cluster-cloud-controller-manager-operator-7dff898856-8lzkl\" (UID: \"80994f33-21e7-45d6-9f21-1cfd8e1f41ce\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-8lzkl" Mar 18 13:24:22.000402 master-0 kubenswrapper[27835]: I0318 13:24:22.000328 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlb8t\" (UniqueName: \"kubernetes.io/projected/00375107-9a3b-4161-a90d-72ea8827c5fc-kube-api-access-zlb8t\") pod \"router-default-7dcf5569b5-gvmtv\" (UID: \"00375107-9a3b-4161-a90d-72ea8827c5fc\") " pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:22.021061 master-0 kubenswrapper[27835]: I0318 13:24:22.021005 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxk9v\" (UniqueName: \"kubernetes.io/projected/d2455453-5943-49ef-bfea-cba077197da0-kube-api-access-lxk9v\") pod \"catalog-operator-68f85b4d6c-t84s9\" (UID: \"d2455453-5943-49ef-bfea-cba077197da0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:22.040977 master-0 kubenswrapper[27835]: I0318 13:24:22.040917 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxv5\" (UniqueName: \"kubernetes.io/projected/053cc9bc-f98e-46f6-93bb-b5344d20bf74-kube-api-access-gnxv5\") pod \"iptables-alerter-jkl4x\" (UID: \"053cc9bc-f98e-46f6-93bb-b5344d20bf74\") " pod="openshift-network-operator/iptables-alerter-jkl4x" Mar 18 13:24:22.059671 master-0 kubenswrapper[27835]: I0318 13:24:22.059541 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfbx8\" (UniqueName: \"kubernetes.io/projected/f38b464d-a218-4753-b7ac-a7d373952c4d-kube-api-access-lfbx8\") pod \"machine-approver-5c6485487f-fk8ql\" (UID: 
\"f38b464d-a218-4753-b7ac-a7d373952c4d\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-fk8ql" Mar 18 13:24:22.079429 master-0 kubenswrapper[27835]: E0318 13:24:22.079356 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:22.079429 master-0 kubenswrapper[27835]: E0318 13:24:22.079405 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:22.079709 master-0 kubenswrapper[27835]: E0318 13:24:22.079502 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:22.579481084 +0000 UTC m=+26.544692654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:22.099854 master-0 kubenswrapper[27835]: E0318 13:24:22.099271 27835 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.818s" Mar 18 13:24:22.099854 master-0 kubenswrapper[27835]: I0318 13:24:22.099318 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h" event={"ID":"720a1f60-c1cb-4aef-aaec-f082090ca631","Type":"ContainerDied","Data":"54bd19e9b4d7f9ab310771b8b4db448ca0ec68978bb44a7d76ba5895f6b7148d"} Mar 18 13:24:22.099854 master-0 kubenswrapper[27835]: I0318 13:24:22.099388 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:24:22.099854 master-0 kubenswrapper[27835]: I0318 13:24:22.099445 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:24:22.099854 master-0 kubenswrapper[27835]: I0318 13:24:22.099460 27835 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="994f49eb-f082-48c0-a73d-916d0ce332bc" Mar 18 13:24:22.100726 master-0 kubenswrapper[27835]: I0318 13:24:22.099457 27835 scope.go:117] "RemoveContainer" containerID="6fc3b00292545591e6c5349f2483ea9d57bac5ac21bd098a1969c029ee5e5b9a" Mar 18 13:24:22.100726 master-0 kubenswrapper[27835]: I0318 13:24:22.099489 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-wsmsc" Mar 18 13:24:22.100726 
master-0 kubenswrapper[27835]: I0318 13:24:22.100563 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:22.100726 master-0 kubenswrapper[27835]: I0318 13:24:22.100593 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:22.100726 master-0 kubenswrapper[27835]: I0318 13:24:22.100606 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:22.100726 master-0 kubenswrapper[27835]: I0318 13:24:22.100616 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:22.108557 master-0 kubenswrapper[27835]: I0318 13:24:22.108517 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 13:24:22.120762 master-0 kubenswrapper[27835]: I0318 13:24:22.120710 27835 scope.go:117] "RemoveContainer" containerID="b6d0118c2fdf2cbc54c92133c6e31568d8996365d7d961746064b4d6f7f3d6e8" Mar 18 13:24:22.133039 master-0 kubenswrapper[27835]: I0318 13:24:22.132988 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:22.133224 master-0 kubenswrapper[27835]: I0318 13:24:22.133053 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:22.133224 master-0 kubenswrapper[27835]: I0318 13:24:22.133073 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453"} Mar 18 13:24:22.133224 master-0 
kubenswrapper[27835]: I0318 13:24:22.133214 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:22.133355 master-0 kubenswrapper[27835]: I0318 13:24:22.133321 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:22.133399 master-0 kubenswrapper[27835]: I0318 13:24:22.133351 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 13:24:22.133399 master-0 kubenswrapper[27835]: I0318 13:24:22.133370 27835 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="994f49eb-f082-48c0-a73d-916d0ce332bc" Mar 18 13:24:22.133399 master-0 kubenswrapper[27835]: I0318 13:24:22.133389 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-wqxpk" event={"ID":"d9d09a56-ed4c-40b7-8be1-f3934c07296e","Type":"ContainerStarted","Data":"2a8478813dad40bba8408bbdbda4913a3fcf3f1caf395619959b37d5488bdc8d"} Mar 18 13:24:22.133577 master-0 kubenswrapper[27835]: I0318 13:24:22.133464 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:24:22.133577 master-0 kubenswrapper[27835]: I0318 13:24:22.133514 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-z8jkt" Mar 18 13:24:22.133577 master-0 kubenswrapper[27835]: I0318 13:24:22.133540 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:22.133701 master-0 kubenswrapper[27835]: I0318 13:24:22.133580 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 13:24:22.133701 master-0 kubenswrapper[27835]: I0318 13:24:22.133600 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:22.133701 master-0 kubenswrapper[27835]: I0318 13:24:22.133641 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:24:22.133701 master-0 kubenswrapper[27835]: I0318 13:24:22.133684 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:22.133856 master-0 kubenswrapper[27835]: I0318 13:24:22.133721 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-9bjsj" Mar 18 13:24:22.133856 master-0 kubenswrapper[27835]: I0318 13:24:22.133753 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:22.133856 master-0 kubenswrapper[27835]: I0318 13:24:22.133793 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-92s8c" Mar 18 13:24:22.133856 master-0 kubenswrapper[27835]: I0318 13:24:22.133809 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:22.133856 master-0 kubenswrapper[27835]: I0318 13:24:22.133839 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.133871 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 
13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.133906 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.133940 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.133974 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.134001 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.134037 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.134068 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-p7vvx" Mar 18 13:24:22.134134 master-0 kubenswrapper[27835]: I0318 13:24:22.134111 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134144 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134181 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 
13:24:22.134219 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134244 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134268 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134291 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-q2ndb" Mar 18 13:24:22.134470 master-0 kubenswrapper[27835]: I0318 13:24:22.134333 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:22.135540 master-0 kubenswrapper[27835]: I0318 13:24:22.135001 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:22.135540 master-0 kubenswrapper[27835]: I0318 13:24:22.135260 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-qwgrm" Mar 18 13:24:22.135540 master-0 kubenswrapper[27835]: I0318 13:24:22.135282 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:22.137129 master-0 kubenswrapper[27835]: I0318 13:24:22.137100 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-kcsgp" Mar 18 13:24:22.141075 master-0 kubenswrapper[27835]: I0318 13:24:22.141044 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-t84s9" Mar 18 13:24:22.142514 master-0 kubenswrapper[27835]: I0318 13:24:22.142494 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:24:22.168143 master-0 kubenswrapper[27835]: I0318 13:24:22.168071 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:22.173118 master-0 kubenswrapper[27835]: I0318 13:24:22.173064 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:22.178339 master-0 kubenswrapper[27835]: I0318 13:24:22.178290 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:22.180278 master-0 kubenswrapper[27835]: I0318 13:24:22.180249 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:22.183738 master-0 kubenswrapper[27835]: I0318 13:24:22.183693 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-ff75f747c-r46tm" Mar 18 13:24:22.186609 master-0 kubenswrapper[27835]: I0318 13:24:22.186559 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:22.186814 master-0 kubenswrapper[27835]: I0318 13:24:22.186646 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:22.190151 master-0 kubenswrapper[27835]: I0318 13:24:22.190113 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:22.190151 master-0 
kubenswrapper[27835]: I0318 13:24:22.190160 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:22.190408 master-0 kubenswrapper[27835]: I0318 13:24:22.190205 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:24:22.192635 master-0 kubenswrapper[27835]: I0318 13:24:22.192591 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]controller ok Mar 18 13:24:22.192635 master-0 kubenswrapper[27835]: [-]backend-http failed: reason withheld Mar 18 13:24:22.192635 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:22.192868 master-0 kubenswrapper[27835]: I0318 13:24:22.192645 27835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:22.198177 master-0 kubenswrapper[27835]: I0318 13:24:22.198114 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:22.198177 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:22.198177 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:22.198177 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:22.198546 master-0 kubenswrapper[27835]: I0318 13:24:22.198192 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:22.665522 master-0 kubenswrapper[27835]: I0318 13:24:22.665463 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:22.665825 master-0 kubenswrapper[27835]: E0318 13:24:22.665632 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:22.665912 master-0 kubenswrapper[27835]: E0318 13:24:22.665901 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:22.666155 master-0 kubenswrapper[27835]: E0318 13:24:22.666143 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:23.66612366 +0000 UTC m=+27.631335210 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:23.025959 master-0 kubenswrapper[27835]: I0318 13:24:23.025717 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:23.193978 master-0 kubenswrapper[27835]: I0318 13:24:23.193907 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:23.193978 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:23.193978 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:23.193978 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:23.193978 master-0 kubenswrapper[27835]: I0318 13:24:23.193981 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:23.686879 master-0 kubenswrapper[27835]: I0318 13:24:23.686825 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:23.687151 master-0 kubenswrapper[27835]: E0318 13:24:23.687076 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:23.687151 master-0 kubenswrapper[27835]: E0318 13:24:23.687120 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:23.687253 master-0 kubenswrapper[27835]: E0318 13:24:23.687193 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:25.687171657 +0000 UTC m=+29.652383227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:23.773722 master-0 kubenswrapper[27835]: I0318 13:24:23.773676 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:24:23.778974 master-0 kubenswrapper[27835]: I0318 13:24:23.778937 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:24.039564 master-0 kubenswrapper[27835]: I0318 13:24:24.039379 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=7.039359543 podStartE2EDuration="7.039359543s" podCreationTimestamp="2026-03-18 13:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:24:24.038664734 +0000 UTC m=+28.003876314" watchObservedRunningTime="2026-03-18 13:24:24.039359543 +0000 UTC m=+28.004571123" 
Mar 18 13:24:24.193441 master-0 kubenswrapper[27835]: I0318 13:24:24.193347 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:24.193441 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:24.193441 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:24.193441 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:24.193796 master-0 kubenswrapper[27835]: I0318 13:24:24.193449 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:24.881169 master-0 kubenswrapper[27835]: I0318 13:24:24.880984 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.880956549 podStartE2EDuration="7.880956549s" podCreationTimestamp="2026-03-18 13:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:24:24.875453115 +0000 UTC m=+28.840664695" watchObservedRunningTime="2026-03-18 13:24:24.880956549 +0000 UTC m=+28.846168119" Mar 18 13:24:25.181388 master-0 kubenswrapper[27835]: I0318 13:24:25.181285 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"] Mar 18 13:24:25.185737 master-0 kubenswrapper[27835]: I0318 13:24:25.185696 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-zrc8h"] Mar 18 13:24:25.193074 master-0 kubenswrapper[27835]: I0318 13:24:25.193029 27835 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:25.193074 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:25.193074 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:25.193074 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:25.193223 master-0 kubenswrapper[27835]: I0318 13:24:25.193091 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:25.729258 master-0 kubenswrapper[27835]: I0318 13:24:25.729190 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:25.729528 master-0 kubenswrapper[27835]: E0318 13:24:25.729355 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:25.729528 master-0 kubenswrapper[27835]: E0318 13:24:25.729391 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:25.729528 master-0 kubenswrapper[27835]: E0318 13:24:25.729473 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. 
No retries permitted until 2026-03-18 13:24:29.729455377 +0000 UTC m=+33.694666937 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:25.926106 master-0 kubenswrapper[27835]: I0318 13:24:25.926051 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5bb6f9f846-6wq9c" Mar 18 13:24:26.194200 master-0 kubenswrapper[27835]: I0318 13:24:26.194157 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:26.194200 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:26.194200 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:26.194200 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:26.194994 master-0 kubenswrapper[27835]: I0318 13:24:26.194961 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:26.295282 master-0 kubenswrapper[27835]: I0318 13:24:26.295228 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" path="/var/lib/kubelet/pods/720a1f60-c1cb-4aef-aaec-f082090ca631/volumes" Mar 18 13:24:26.520807 master-0 kubenswrapper[27835]: I0318 13:24:26.520609 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-85b59d8688-wd26k" Mar 18 
13:24:27.193360 master-0 kubenswrapper[27835]: I0318 13:24:27.193312 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:27.193360 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:27.193360 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:27.193360 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:27.193829 master-0 kubenswrapper[27835]: I0318 13:24:27.193371 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:28.025821 master-0 kubenswrapper[27835]: I0318 13:24:28.025767 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:24:28.089739 master-0 kubenswrapper[27835]: I0318 13:24:28.089681 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:28.093933 master-0 kubenswrapper[27835]: I0318 13:24:28.093863 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:28.098388 master-0 kubenswrapper[27835]: I0318 13:24:28.098350 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:24:28.121751 master-0 kubenswrapper[27835]: I0318 13:24:28.121686 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 13:24:28.144044 master-0 
kubenswrapper[27835]: I0318 13:24:28.143990 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 13:24:28.208621 master-0 kubenswrapper[27835]: I0318 13:24:28.208574 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:28.208621 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:28.208621 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:28.208621 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:28.208943 master-0 kubenswrapper[27835]: I0318 13:24:28.208630 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:28.342382 master-0 kubenswrapper[27835]: I0318 13:24:28.342242 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:24:28.342609 master-0 kubenswrapper[27835]: I0318 13:24:28.342477 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" containerID="cri-o://70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092" gracePeriod=5 Mar 18 13:24:29.192709 master-0 kubenswrapper[27835]: I0318 13:24:29.192648 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
13:24:29.192709 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:29.192709 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:29.192709 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:29.193280 master-0 kubenswrapper[27835]: I0318 13:24:29.192741 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:29.786105 master-0 kubenswrapper[27835]: I0318 13:24:29.786047 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:29.786359 master-0 kubenswrapper[27835]: E0318 13:24:29.786267 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:29.786359 master-0 kubenswrapper[27835]: E0318 13:24:29.786307 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:29.786471 master-0 kubenswrapper[27835]: E0318 13:24:29.786374 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:37.786351929 +0000 UTC m=+41.751563499 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:30.192359 master-0 kubenswrapper[27835]: I0318 13:24:30.192295 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:30.192359 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:30.192359 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:30.192359 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:30.192359 master-0 kubenswrapper[27835]: I0318 13:24:30.192355 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:31.194108 master-0 kubenswrapper[27835]: I0318 13:24:31.194026 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:31.194108 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:31.194108 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:31.194108 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:31.194108 master-0 kubenswrapper[27835]: I0318 13:24:31.194097 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:31.618592 master-0 kubenswrapper[27835]: I0318 13:24:31.618439 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqw5h" Mar 18 13:24:31.952132 master-0 kubenswrapper[27835]: I0318 13:24:31.952068 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-89st2" Mar 18 13:24:31.968386 master-0 kubenswrapper[27835]: I0318 13:24:31.968336 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8wqfk" Mar 18 13:24:32.192899 master-0 kubenswrapper[27835]: I0318 13:24:32.192815 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:32.192899 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:32.192899 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:32.192899 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:32.193213 master-0 kubenswrapper[27835]: I0318 13:24:32.192924 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:32.230214 master-0 kubenswrapper[27835]: I0318 13:24:32.230095 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:32.290467 master-0 kubenswrapper[27835]: I0318 13:24:32.290370 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-bxlrz" Mar 18 13:24:33.197547 master-0 kubenswrapper[27835]: I0318 13:24:33.197459 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:33.197547 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:33.197547 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:33.197547 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:33.198095 master-0 kubenswrapper[27835]: I0318 13:24:33.197577 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:33.488394 master-0 kubenswrapper[27835]: I0318 13:24:33.488359 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 13:24:33.488977 master-0 kubenswrapper[27835]: I0318 13:24:33.488434 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:33.534283 master-0 kubenswrapper[27835]: I0318 13:24:33.534233 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:33.534573 master-0 kubenswrapper[27835]: I0318 13:24:33.534309 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:33.534573 master-0 kubenswrapper[27835]: I0318 13:24:33.534372 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:33.534573 master-0 kubenswrapper[27835]: I0318 13:24:33.534401 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:33.534573 master-0 kubenswrapper[27835]: I0318 13:24:33.534473 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 18 13:24:33.534892 master-0 kubenswrapper[27835]: I0318 13:24:33.534863 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:33.535018 master-0 kubenswrapper[27835]: I0318 13:24:33.534938 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:33.535127 master-0 kubenswrapper[27835]: I0318 13:24:33.534938 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:33.535233 master-0 kubenswrapper[27835]: I0318 13:24:33.534961 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:33.542769 master-0 kubenswrapper[27835]: I0318 13:24:33.542716 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:24:33.641812 master-0 kubenswrapper[27835]: I0318 13:24:33.641719 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:33.642055 master-0 kubenswrapper[27835]: I0318 13:24:33.641827 27835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:33.642055 master-0 kubenswrapper[27835]: I0318 13:24:33.641879 27835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:33.642055 master-0 kubenswrapper[27835]: I0318 13:24:33.641913 27835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:33.642055 master-0 kubenswrapper[27835]: I0318 13:24:33.641948 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:24:33.840980 master-0 kubenswrapper[27835]: I0318 13:24:33.840838 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 18 13:24:33.840980 master-0 kubenswrapper[27835]: I0318 13:24:33.840926 27835 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092" exitCode=137 Mar 18 13:24:33.841250 master-0 
kubenswrapper[27835]: I0318 13:24:33.841007 27835 scope.go:117] "RemoveContainer" containerID="70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092" Mar 18 13:24:33.841250 master-0 kubenswrapper[27835]: I0318 13:24:33.841085 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:24:33.855147 master-0 kubenswrapper[27835]: I0318 13:24:33.855112 27835 scope.go:117] "RemoveContainer" containerID="70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092" Mar 18 13:24:33.855486 master-0 kubenswrapper[27835]: E0318 13:24:33.855464 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092\": container with ID starting with 70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092 not found: ID does not exist" containerID="70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092" Mar 18 13:24:33.855588 master-0 kubenswrapper[27835]: I0318 13:24:33.855565 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092"} err="failed to get container status \"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092\": rpc error: code = NotFound desc = could not find container \"70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092\": container with ID starting with 70f992d6ed6a10b9fe4f1fff57a0961086b5274d71307bfff82b9e6a9a664092 not found: ID does not exist" Mar 18 13:24:33.950301 master-0 kubenswrapper[27835]: I0318 13:24:33.950253 27835 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="7d7967a0-a711-4e90-bd5a-d5910240beaa" Mar 18 13:24:34.193235 master-0 kubenswrapper[27835]: 
I0318 13:24:34.193165 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:34.193235 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:34.193235 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:34.193235 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:34.193728 master-0 kubenswrapper[27835]: I0318 13:24:34.193272 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:34.290063 master-0 kubenswrapper[27835]: I0318 13:24:34.289990 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes" Mar 18 13:24:34.290309 master-0 kubenswrapper[27835]: I0318 13:24:34.290281 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 18 13:24:34.301663 master-0 kubenswrapper[27835]: I0318 13:24:34.301619 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:24:34.301663 master-0 kubenswrapper[27835]: I0318 13:24:34.301659 27835 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="7d7967a0-a711-4e90-bd5a-d5910240beaa" Mar 18 13:24:34.304982 master-0 kubenswrapper[27835]: I0318 13:24:34.304954 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 
13:24:34.304982 master-0 kubenswrapper[27835]: I0318 13:24:34.304981 27835 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="7d7967a0-a711-4e90-bd5a-d5910240beaa" Mar 18 13:24:34.762209 master-0 kubenswrapper[27835]: I0318 13:24:34.762094 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:34.762930 master-0 kubenswrapper[27835]: I0318 13:24:34.762405 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:24:34.783383 master-0 kubenswrapper[27835]: I0318 13:24:34.782936 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kxqjc" Mar 18 13:24:35.228005 master-0 kubenswrapper[27835]: I0318 13:24:35.227928 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:35.228005 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:35.228005 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:35.228005 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:35.228321 master-0 kubenswrapper[27835]: I0318 13:24:35.228078 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:36.193643 master-0 kubenswrapper[27835]: I0318 13:24:36.193589 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:36.193643 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:36.193643 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:36.193643 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:36.194351 master-0 kubenswrapper[27835]: I0318 13:24:36.193654 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:37.193008 master-0 kubenswrapper[27835]: I0318 13:24:37.192940 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:37.193008 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:37.193008 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:37.193008 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:37.193284 master-0 kubenswrapper[27835]: I0318 13:24:37.193035 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:37.802057 master-0 kubenswrapper[27835]: I0318 13:24:37.801626 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:37.802057 master-0 kubenswrapper[27835]: E0318 
13:24:37.801926 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:37.802057 master-0 kubenswrapper[27835]: E0318 13:24:37.801990 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:37.803614 master-0 kubenswrapper[27835]: E0318 13:24:37.802082 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:24:53.802052474 +0000 UTC m=+57.767264054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:38.194465 master-0 kubenswrapper[27835]: I0318 13:24:38.194326 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:38.194465 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:38.194465 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:38.194465 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:38.195107 master-0 kubenswrapper[27835]: I0318 13:24:38.194486 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:24:39.194075 master-0 kubenswrapper[27835]: I0318 13:24:39.193974 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:39.194075 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:39.194075 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:39.194075 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:39.194699 master-0 kubenswrapper[27835]: I0318 13:24:39.194113 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:40.195894 master-0 kubenswrapper[27835]: I0318 13:24:40.195789 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:40.195894 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:40.195894 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:40.195894 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:40.196612 master-0 kubenswrapper[27835]: I0318 13:24:40.195924 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:41.194006 master-0 kubenswrapper[27835]: I0318 13:24:41.193890 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:41.194006 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:41.194006 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:41.194006 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:41.194547 master-0 kubenswrapper[27835]: I0318 13:24:41.194021 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:41.603192 master-0 kubenswrapper[27835]: I0318 13:24:41.603065 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:24:42.192077 master-0 kubenswrapper[27835]: I0318 13:24:42.192038 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:42.192077 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:42.192077 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:42.192077 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:42.192402 master-0 kubenswrapper[27835]: I0318 13:24:42.192099 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:43.192884 master-0 kubenswrapper[27835]: I0318 13:24:43.192826 27835 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:43.192884 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:43.192884 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:43.192884 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:43.193971 master-0 kubenswrapper[27835]: I0318 13:24:43.192923 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:44.192595 master-0 kubenswrapper[27835]: I0318 13:24:44.192523 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:44.192595 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:44.192595 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:44.192595 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:44.192595 master-0 kubenswrapper[27835]: I0318 13:24:44.192579 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:45.195665 master-0 kubenswrapper[27835]: I0318 13:24:45.195572 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 18 13:24:45.195665 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:45.195665 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:45.195665 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:45.196223 master-0 kubenswrapper[27835]: I0318 13:24:45.195673 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:46.180179 master-0 kubenswrapper[27835]: I0318 13:24:46.180113 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz"] Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: E0318 13:24:46.180390 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: I0318 13:24:46.180407 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: E0318 13:24:46.180446 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerName="installer" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: I0318 13:24:46.180456 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerName="installer" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: E0318 13:24:46.180470 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: I0318 13:24:46.180478 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce43e217adc4d0869adee3ba7c628c00" 
containerName="cluster-policy-controller" Mar 18 13:24:46.180488 master-0 kubenswrapper[27835]: E0318 13:24:46.180496 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180505 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180520 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180530 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180540 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180548 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180561 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180568 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180581 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b853631-ff77-4643-aa07-b1f8056320a3" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180588 27835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9b853631-ff77-4643-aa07-b1f8056320a3" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180602 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="multus-admission-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180610 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="multus-admission-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180632 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180640 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180652 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180661 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180673 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180680 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180695 27835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180702 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: E0318 13:24:46.180710 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="kube-rbac-proxy" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180717 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="kube-rbac-proxy" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180852 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d424a6-cf4e-4e32-bc50-db63ef03f8dd" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180873 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="multus-admission-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180884 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce43e217adc4d0869adee3ba7c628c00" containerName="cluster-policy-controller" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180904 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb385758-78ae-46b3-994e-fec9b14b7322" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180915 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="62eae2a9-2667-431e-ad73-ca18124d01f6" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180923 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4c87a8-6bf0-43b2-b598-1561cba3e391" containerName="installer" 
Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180940 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5217b77d-b517-45c3-b76d-eee86d72b141" containerName="installer" Mar 18 13:24:46.180907 master-0 kubenswrapper[27835]: I0318 13:24:46.180951 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="80daec9e-b15b-4782-a1f7-ce398bbe323b" containerName="assisted-installer-controller" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.180961 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.180970 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="814ffa63-b08e-4de8-b912-8d7f0638230b" containerName="installer" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.180982 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e646632b-0ce3-4cb4-9ed6-aa43b7d0cf41" containerName="installer" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.180993 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="615539dc-56e1-4489-9aee-33b3e769d4fc" containerName="installer" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.181005 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b853631-ff77-4643-aa07-b1f8056320a3" containerName="installer" Mar 18 13:24:46.182565 master-0 kubenswrapper[27835]: I0318 13:24:46.181014 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="720a1f60-c1cb-4aef-aaec-f082090ca631" containerName="kube-rbac-proxy" Mar 18 13:24:46.192975 master-0 kubenswrapper[27835]: I0318 13:24:46.189578 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:46.196149 master-0 kubenswrapper[27835]: I0318 13:24:46.195648 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mbkdw" Mar 18 13:24:46.197045 master-0 kubenswrapper[27835]: I0318 13:24:46.196591 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 13:24:46.197821 master-0 kubenswrapper[27835]: I0318 13:24:46.197747 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:46.197821 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:46.197821 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:46.197821 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:46.200215 master-0 kubenswrapper[27835]: I0318 13:24:46.197841 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:46.203935 master-0 kubenswrapper[27835]: I0318 13:24:46.203896 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz"] Mar 18 13:24:46.223025 master-0 kubenswrapper[27835]: I0318 13:24:46.222962 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9439c9e6-476c-4bee-8285-5155fa553f30-monitoring-plugin-cert\") pod \"monitoring-plugin-75f844c59b-v7dzz\" (UID: \"9439c9e6-476c-4bee-8285-5155fa553f30\") " 
pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:46.339453 master-0 kubenswrapper[27835]: I0318 13:24:46.327022 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9439c9e6-476c-4bee-8285-5155fa553f30-monitoring-plugin-cert\") pod \"monitoring-plugin-75f844c59b-v7dzz\" (UID: \"9439c9e6-476c-4bee-8285-5155fa553f30\") " pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:46.339453 master-0 kubenswrapper[27835]: I0318 13:24:46.327504 27835 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 13:24:46.339453 master-0 kubenswrapper[27835]: I0318 13:24:46.333069 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9439c9e6-476c-4bee-8285-5155fa553f30-monitoring-plugin-cert\") pod \"monitoring-plugin-75f844c59b-v7dzz\" (UID: \"9439c9e6-476c-4bee-8285-5155fa553f30\") " pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:46.579063 master-0 kubenswrapper[27835]: I0318 13:24:46.578862 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:46.981558 master-0 kubenswrapper[27835]: I0318 13:24:46.981259 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz"] Mar 18 13:24:46.992180 master-0 kubenswrapper[27835]: I0318 13:24:46.992150 27835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:24:47.192477 master-0 kubenswrapper[27835]: I0318 13:24:47.192424 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:47.192477 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:47.192477 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:47.192477 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:47.192477 master-0 kubenswrapper[27835]: I0318 13:24:47.192486 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:47.944068 master-0 kubenswrapper[27835]: I0318 13:24:47.944018 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" event={"ID":"9439c9e6-476c-4bee-8285-5155fa553f30","Type":"ContainerStarted","Data":"bfcc29c598be870d0a30159bb9167fcb95c8e136dafa5afef20f6f10e3b20788"} Mar 18 13:24:48.192589 master-0 kubenswrapper[27835]: I0318 13:24:48.192538 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:48.192589 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:48.192589 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:48.192589 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:48.192877 master-0 kubenswrapper[27835]: I0318 13:24:48.192614 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:49.194082 master-0 kubenswrapper[27835]: I0318 13:24:49.193702 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:49.194082 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:49.194082 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:49.194082 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:49.194082 master-0 kubenswrapper[27835]: I0318 13:24:49.193776 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:49.960334 master-0 kubenswrapper[27835]: I0318 13:24:49.960260 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" event={"ID":"9439c9e6-476c-4bee-8285-5155fa553f30","Type":"ContainerStarted","Data":"ff280e6d23b01e76ac48c7d26621b6fcddbb9a3157c3ccd3b86ed5d39152eaaf"} Mar 18 13:24:49.983002 master-0 kubenswrapper[27835]: I0318 13:24:49.982911 27835 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" podStartSLOduration=1.3916373850000001 podStartE2EDuration="3.982885846s" podCreationTimestamp="2026-03-18 13:24:46 +0000 UTC" firstStartedPulling="2026-03-18 13:24:46.992107679 +0000 UTC m=+50.957319239" lastFinishedPulling="2026-03-18 13:24:49.58335614 +0000 UTC m=+53.548567700" observedRunningTime="2026-03-18 13:24:49.980861085 +0000 UTC m=+53.946072715" watchObservedRunningTime="2026-03-18 13:24:49.982885846 +0000 UTC m=+53.948097426" Mar 18 13:24:50.192594 master-0 kubenswrapper[27835]: I0318 13:24:50.192534 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:50.192594 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:50.192594 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:50.192594 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:50.193170 master-0 kubenswrapper[27835]: I0318 13:24:50.192607 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:50.966063 master-0 kubenswrapper[27835]: I0318 13:24:50.966007 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:50.971439 master-0 kubenswrapper[27835]: I0318 13:24:50.971390 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-75f844c59b-v7dzz" Mar 18 13:24:51.192862 master-0 kubenswrapper[27835]: I0318 13:24:51.192820 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:51.192862 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:51.192862 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:51.192862 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:51.193207 master-0 kubenswrapper[27835]: I0318 13:24:51.192887 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:52.192709 master-0 kubenswrapper[27835]: I0318 13:24:52.192627 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:52.192709 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:52.192709 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:52.192709 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:52.193486 master-0 kubenswrapper[27835]: I0318 13:24:52.192709 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:53.192375 master-0 kubenswrapper[27835]: I0318 13:24:53.192294 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:53.192375 
master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:53.192375 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:53.192375 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:53.192725 master-0 kubenswrapper[27835]: I0318 13:24:53.192449 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:53.840444 master-0 kubenswrapper[27835]: I0318 13:24:53.840379 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:24:53.841058 master-0 kubenswrapper[27835]: E0318 13:24:53.840639 27835 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:53.841058 master-0 kubenswrapper[27835]: E0318 13:24:53.840693 27835 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:53.841058 master-0 kubenswrapper[27835]: E0318 13:24:53.840781 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access podName:9b853631-ff77-4643-aa07-b1f8056320a3 nodeName:}" failed. No retries permitted until 2026-03-18 13:25:25.84075497 +0000 UTC m=+89.805966570 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access") pod "installer-3-master-0" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 13:24:54.193556 master-0 kubenswrapper[27835]: I0318 13:24:54.193463 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:54.193556 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:54.193556 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:54.193556 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:54.194011 master-0 kubenswrapper[27835]: I0318 13:24:54.193567 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:55.193165 master-0 kubenswrapper[27835]: I0318 13:24:55.193066 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:55.193165 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:55.193165 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:55.193165 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:55.194025 master-0 kubenswrapper[27835]: I0318 13:24:55.193166 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:56.193731 master-0 kubenswrapper[27835]: I0318 13:24:56.193663 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:56.193731 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:56.193731 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:56.193731 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:56.194395 master-0 kubenswrapper[27835]: I0318 13:24:56.193733 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:57.192598 master-0 kubenswrapper[27835]: I0318 13:24:57.192559 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:57.192598 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:57.192598 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:57.192598 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:57.192936 master-0 kubenswrapper[27835]: I0318 13:24:57.192908 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:58.192710 master-0 kubenswrapper[27835]: I0318 13:24:58.192666 27835 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:58.192710 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:58.192710 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:58.192710 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:58.193318 master-0 kubenswrapper[27835]: I0318 13:24:58.193290 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:24:59.194013 master-0 kubenswrapper[27835]: I0318 13:24:59.193923 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:24:59.194013 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:24:59.194013 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:24:59.194013 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:24:59.195019 master-0 kubenswrapper[27835]: I0318 13:24:59.194025 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:00.193926 master-0 kubenswrapper[27835]: I0318 13:25:00.193672 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 18 13:25:00.193926 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:00.193926 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:00.193926 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:00.194621 master-0 kubenswrapper[27835]: I0318 13:25:00.193960 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:00.294714 master-0 kubenswrapper[27835]: I0318 13:25:00.294652 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cxn2f"] Mar 18 13:25:00.295666 master-0 kubenswrapper[27835]: I0318 13:25:00.295624 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.298152 master-0 kubenswrapper[27835]: I0318 13:25:00.298108 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:25:00.298152 master-0 kubenswrapper[27835]: I0318 13:25:00.298139 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 13:25:00.298471 master-0 kubenswrapper[27835]: I0318 13:25:00.298226 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:25:00.298471 master-0 kubenswrapper[27835]: I0318 13:25:00.298404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 13:25:00.298705 master-0 kubenswrapper[27835]: I0318 13:25:00.298507 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-9lwzk" Mar 18 13:25:00.307787 master-0 kubenswrapper[27835]: I0318 13:25:00.307049 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:25:00.343294 master-0 kubenswrapper[27835]: I0318 13:25:00.343231 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz8vq\" (UniqueName: \"kubernetes.io/projected/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-kube-api-access-fz8vq\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.343508 master-0 kubenswrapper[27835]: I0318 13:25:00.343332 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-trusted-ca\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.343508 master-0 kubenswrapper[27835]: I0318 13:25:00.343396 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-serving-cert\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.343586 master-0 kubenswrapper[27835]: I0318 13:25:00.343540 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-config\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " 
pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.445219 master-0 kubenswrapper[27835]: I0318 13:25:00.445045 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz8vq\" (UniqueName: \"kubernetes.io/projected/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-kube-api-access-fz8vq\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.445219 master-0 kubenswrapper[27835]: I0318 13:25:00.445225 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-trusted-ca\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.445549 master-0 kubenswrapper[27835]: I0318 13:25:00.445453 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-serving-cert\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.445650 master-0 kubenswrapper[27835]: I0318 13:25:00.445606 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-config\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.447379 master-0 kubenswrapper[27835]: I0318 13:25:00.447318 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-config\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.447733 master-0 kubenswrapper[27835]: I0318 13:25:00.447681 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-trusted-ca\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.450332 master-0 kubenswrapper[27835]: I0318 13:25:00.450296 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-serving-cert\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.519403 master-0 kubenswrapper[27835]: I0318 13:25:00.519326 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cxn2f"] Mar 18 13:25:00.586163 master-0 kubenswrapper[27835]: I0318 13:25:00.586110 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz8vq\" (UniqueName: \"kubernetes.io/projected/5e8a9745-28f0-47b0-a930-49ce65ca1ae0-kube-api-access-fz8vq\") pod \"console-operator-76b6568d85-cxn2f\" (UID: \"5e8a9745-28f0-47b0-a930-49ce65ca1ae0\") " pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:00.638792 master-0 kubenswrapper[27835]: I0318 13:25:00.638727 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:01.175337 master-0 kubenswrapper[27835]: I0318 13:25:01.175259 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-cxn2f"] Mar 18 13:25:01.194471 master-0 kubenswrapper[27835]: I0318 13:25:01.194336 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:01.194471 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:01.194471 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:01.194471 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:01.194471 master-0 kubenswrapper[27835]: I0318 13:25:01.194444 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:01.584279 master-0 kubenswrapper[27835]: I0318 13:25:01.583906 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:01.586255 master-0 kubenswrapper[27835]: I0318 13:25:01.584812 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.587331 master-0 kubenswrapper[27835]: I0318 13:25:01.586954 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:25:01.587331 master-0 kubenswrapper[27835]: I0318 13:25:01.587217 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sd5ht" Mar 18 13:25:01.647376 master-0 kubenswrapper[27835]: I0318 13:25:01.615484 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:01.665400 master-0 kubenswrapper[27835]: I0318 13:25:01.664213 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.665400 master-0 kubenswrapper[27835]: I0318 13:25:01.664545 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.665400 master-0 kubenswrapper[27835]: I0318 13:25:01.664668 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.765731 master-0 kubenswrapper[27835]: I0318 13:25:01.765671 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.765731 master-0 kubenswrapper[27835]: I0318 13:25:01.765731 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.765948 master-0 kubenswrapper[27835]: I0318 13:25:01.765756 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.765948 master-0 kubenswrapper[27835]: I0318 13:25:01.765837 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.765948 master-0 kubenswrapper[27835]: I0318 13:25:01.765881 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.780650 master-0 kubenswrapper[27835]: I0318 13:25:01.780607 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:01.950994 master-0 kubenswrapper[27835]: I0318 13:25:01.950930 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:02.051256 master-0 kubenswrapper[27835]: I0318 13:25:02.051163 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" event={"ID":"5e8a9745-28f0-47b0-a930-49ce65ca1ae0","Type":"ContainerStarted","Data":"4ef0c48067dd3fa4832b756328090a5e421d050a8aa421152848981b02c76555"} Mar 18 13:25:02.193294 master-0 kubenswrapper[27835]: I0318 13:25:02.192850 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:02.193294 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:02.193294 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:02.193294 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:02.193294 master-0 kubenswrapper[27835]: I0318 13:25:02.192938 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:02.382254 master-0 kubenswrapper[27835]: I0318 13:25:02.381866 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:03.061338 master-0 kubenswrapper[27835]: I0318 13:25:03.061178 
27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0737b13d-faed-44e2-9d20-1f3860dcc9bd","Type":"ContainerStarted","Data":"45c1bfe81a4ec9a67e0f96ccae8aa8e92cc20e9572ced1d331993a3be67d4dd1"} Mar 18 13:25:03.061338 master-0 kubenswrapper[27835]: I0318 13:25:03.061244 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0737b13d-faed-44e2-9d20-1f3860dcc9bd","Type":"ContainerStarted","Data":"6cf7021ca6792db8c60b9145b52ea7d21d54d79b1809f86547fc87d8f8a60ed1"} Mar 18 13:25:03.111347 master-0 kubenswrapper[27835]: I0318 13:25:03.106760 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.106724687 podStartE2EDuration="2.106724687s" podCreationTimestamp="2026-03-18 13:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:25:03.104149 +0000 UTC m=+67.069360590" watchObservedRunningTime="2026-03-18 13:25:03.106724687 +0000 UTC m=+67.071936257" Mar 18 13:25:03.193027 master-0 kubenswrapper[27835]: I0318 13:25:03.192954 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:03.193027 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:03.193027 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:03.193027 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:03.193665 master-0 kubenswrapper[27835]: I0318 13:25:03.193030 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:04.195015 master-0 kubenswrapper[27835]: I0318 13:25:04.194938 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:04.195015 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:04.195015 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:04.195015 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:04.195685 master-0 kubenswrapper[27835]: I0318 13:25:04.195021 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:05.008952 master-0 kubenswrapper[27835]: I0318 13:25:05.008865 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-2tkdh"] Mar 18 13:25:05.009886 master-0 kubenswrapper[27835]: I0318 13:25:05.009859 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:05.012608 master-0 kubenswrapper[27835]: I0318 13:25:05.012567 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 13:25:05.012608 master-0 kubenswrapper[27835]: I0318 13:25:05.012586 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-kcq89" Mar 18 13:25:05.015749 master-0 kubenswrapper[27835]: I0318 13:25:05.015713 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 13:25:05.026841 master-0 kubenswrapper[27835]: I0318 13:25:05.026783 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-2tkdh"] Mar 18 13:25:05.075672 master-0 kubenswrapper[27835]: I0318 13:25:05.075603 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" event={"ID":"5e8a9745-28f0-47b0-a930-49ce65ca1ae0","Type":"ContainerStarted","Data":"8e84d47a29e6517fca92cd9b0830b79000d242ae65ad1de8a929ffb53c2a9c48"} Mar 18 13:25:05.075947 master-0 kubenswrapper[27835]: I0318 13:25:05.075900 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:05.095339 master-0 kubenswrapper[27835]: I0318 13:25:05.082617 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" Mar 18 13:25:05.107185 master-0 kubenswrapper[27835]: I0318 13:25:05.107103 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-cxn2f" podStartSLOduration=2.258435042 podStartE2EDuration="5.107085466s" podCreationTimestamp="2026-03-18 13:25:00 +0000 UTC" firstStartedPulling="2026-03-18 13:25:01.194856585 
+0000 UTC m=+65.160068195" lastFinishedPulling="2026-03-18 13:25:04.043507059 +0000 UTC m=+68.008718619" observedRunningTime="2026-03-18 13:25:05.104327284 +0000 UTC m=+69.069538854" watchObservedRunningTime="2026-03-18 13:25:05.107085466 +0000 UTC m=+69.072297026" Mar 18 13:25:05.113567 master-0 kubenswrapper[27835]: I0318 13:25:05.113518 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l927g\" (UniqueName: \"kubernetes.io/projected/3f688009-66eb-490d-a0fb-464dba69fb96-kube-api-access-l927g\") pod \"downloads-66b8ffb895-2tkdh\" (UID: \"3f688009-66eb-490d-a0fb-464dba69fb96\") " pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:05.192530 master-0 kubenswrapper[27835]: I0318 13:25:05.192394 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:05.192530 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:05.192530 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:05.192530 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:05.192808 master-0 kubenswrapper[27835]: I0318 13:25:05.192545 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:05.215201 master-0 kubenswrapper[27835]: I0318 13:25:05.215136 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l927g\" (UniqueName: \"kubernetes.io/projected/3f688009-66eb-490d-a0fb-464dba69fb96-kube-api-access-l927g\") pod \"downloads-66b8ffb895-2tkdh\" (UID: \"3f688009-66eb-490d-a0fb-464dba69fb96\") " 
pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:05.230391 master-0 kubenswrapper[27835]: I0318 13:25:05.230350 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l927g\" (UniqueName: \"kubernetes.io/projected/3f688009-66eb-490d-a0fb-464dba69fb96-kube-api-access-l927g\") pod \"downloads-66b8ffb895-2tkdh\" (UID: \"3f688009-66eb-490d-a0fb-464dba69fb96\") " pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:05.369520 master-0 kubenswrapper[27835]: I0318 13:25:05.369282 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:05.840077 master-0 kubenswrapper[27835]: W0318 13:25:05.839998 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f688009_66eb_490d_a0fb_464dba69fb96.slice/crio-83d4f909dbee60168888205b42335486171dc9d09cae5f3178d7492d43a22925 WatchSource:0}: Error finding container 83d4f909dbee60168888205b42335486171dc9d09cae5f3178d7492d43a22925: Status 404 returned error can't find the container with id 83d4f909dbee60168888205b42335486171dc9d09cae5f3178d7492d43a22925 Mar 18 13:25:05.840649 master-0 kubenswrapper[27835]: I0318 13:25:05.840494 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-2tkdh"] Mar 18 13:25:06.084235 master-0 kubenswrapper[27835]: I0318 13:25:06.084013 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-2tkdh" event={"ID":"3f688009-66eb-490d-a0fb-464dba69fb96","Type":"ContainerStarted","Data":"83d4f909dbee60168888205b42335486171dc9d09cae5f3178d7492d43a22925"} Mar 18 13:25:06.193306 master-0 kubenswrapper[27835]: I0318 13:25:06.193245 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:06.193306 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:06.193306 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:06.193306 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:06.193306 master-0 kubenswrapper[27835]: I0318 13:25:06.193307 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:07.193876 master-0 kubenswrapper[27835]: I0318 13:25:07.193791 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:07.193876 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:07.193876 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:07.193876 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:07.194464 master-0 kubenswrapper[27835]: I0318 13:25:07.193916 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:08.196166 master-0 kubenswrapper[27835]: I0318 13:25:08.196084 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:08.196166 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:08.196166 master-0 
kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:08.196166 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:08.197392 master-0 kubenswrapper[27835]: I0318 13:25:08.196181 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:09.192941 master-0 kubenswrapper[27835]: I0318 13:25:09.192879 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:09.192941 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:09.192941 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:09.192941 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:09.193228 master-0 kubenswrapper[27835]: I0318 13:25:09.192958 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:10.192935 master-0 kubenswrapper[27835]: I0318 13:25:10.192879 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:10.192935 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:10.192935 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:10.192935 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:10.193514 master-0 kubenswrapper[27835]: I0318 13:25:10.192965 27835 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:11.192091 master-0 kubenswrapper[27835]: I0318 13:25:11.192038 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:11.192091 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:11.192091 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:11.192091 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:11.192462 master-0 kubenswrapper[27835]: I0318 13:25:11.192111 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:12.192114 master-0 kubenswrapper[27835]: I0318 13:25:12.192069 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:12.192114 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:12.192114 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:12.192114 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:12.192756 master-0 kubenswrapper[27835]: I0318 13:25:12.192146 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 13:25:13.193026 master-0 kubenswrapper[27835]: I0318 13:25:13.192921 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:13.193026 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:13.193026 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:13.193026 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:13.193999 master-0 kubenswrapper[27835]: I0318 13:25:13.193051 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:14.193583 master-0 kubenswrapper[27835]: I0318 13:25:14.193518 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:14.193583 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:14.193583 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:14.193583 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:14.194389 master-0 kubenswrapper[27835]: I0318 13:25:14.193605 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:15.192822 master-0 kubenswrapper[27835]: I0318 13:25:15.192766 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:15.192822 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:15.192822 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:15.192822 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:15.192822 master-0 kubenswrapper[27835]: I0318 13:25:15.192820 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:16.192886 master-0 kubenswrapper[27835]: I0318 13:25:16.192835 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:16.192886 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:16.192886 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:16.192886 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:16.193488 master-0 kubenswrapper[27835]: I0318 13:25:16.192901 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:17.192842 master-0 kubenswrapper[27835]: I0318 13:25:17.192631 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:17.192842 
master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:17.192842 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:17.192842 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:17.193638 master-0 kubenswrapper[27835]: I0318 13:25:17.192900 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:18.192726 master-0 kubenswrapper[27835]: I0318 13:25:18.192665 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:18.192726 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:18.192726 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:18.192726 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:18.193022 master-0 kubenswrapper[27835]: I0318 13:25:18.192728 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:19.194037 master-0 kubenswrapper[27835]: I0318 13:25:19.193622 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:19.194037 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:19.194037 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:19.194037 master-0 kubenswrapper[27835]: 
healthz check failed Mar 18 13:25:19.194037 master-0 kubenswrapper[27835]: I0318 13:25:19.193696 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:20.193391 master-0 kubenswrapper[27835]: I0318 13:25:20.193327 27835 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-gvmtv container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 13:25:20.193391 master-0 kubenswrapper[27835]: [-]has-synced failed: reason withheld Mar 18 13:25:20.193391 master-0 kubenswrapper[27835]: [+]process-running ok Mar 18 13:25:20.193391 master-0 kubenswrapper[27835]: healthz check failed Mar 18 13:25:20.193693 master-0 kubenswrapper[27835]: I0318 13:25:20.193424 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" podUID="00375107-9a3b-4161-a90d-72ea8827c5fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 13:25:21.199952 master-0 kubenswrapper[27835]: I0318 13:25:21.199865 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:25:21.209656 master-0 kubenswrapper[27835]: I0318 13:25:21.209595 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5df65d974f-mpf5j"] Mar 18 13:25:21.211462 master-0 kubenswrapper[27835]: I0318 13:25:21.210562 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-gvmtv" Mar 18 13:25:21.211462 master-0 kubenswrapper[27835]: I0318 13:25:21.210671 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.218196 master-0 kubenswrapper[27835]: I0318 13:25:21.218158 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 13:25:21.218372 master-0 kubenswrapper[27835]: I0318 13:25:21.218356 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sszww" Mar 18 13:25:21.218505 master-0 kubenswrapper[27835]: I0318 13:25:21.218486 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 13:25:21.218611 master-0 kubenswrapper[27835]: I0318 13:25:21.218577 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:25:21.218656 master-0 kubenswrapper[27835]: I0318 13:25:21.218635 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 13:25:21.220075 master-0 kubenswrapper[27835]: I0318 13:25:21.218922 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:25:21.249436 master-0 kubenswrapper[27835]: I0318 13:25:21.248439 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5df65d974f-mpf5j"] Mar 18 13:25:21.370088 master-0 kubenswrapper[27835]: I0318 13:25:21.370041 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.370353 master-0 kubenswrapper[27835]: I0318 13:25:21.370336 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.370481 master-0 kubenswrapper[27835]: I0318 13:25:21.370463 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.370594 master-0 kubenswrapper[27835]: I0318 13:25:21.370580 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6dl\" (UniqueName: \"kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.370678 master-0 kubenswrapper[27835]: I0318 13:25:21.370666 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.370756 master-0 kubenswrapper[27835]: I0318 13:25:21.370744 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.472115 master-0 kubenswrapper[27835]: 
I0318 13:25:21.471983 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6dl\" (UniqueName: \"kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.472115 master-0 kubenswrapper[27835]: I0318 13:25:21.472043 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.472115 master-0 kubenswrapper[27835]: I0318 13:25:21.472065 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.472115 master-0 kubenswrapper[27835]: I0318 13:25:21.472106 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.472467 master-0 kubenswrapper[27835]: I0318 13:25:21.472144 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 
13:25:21.472467 master-0 kubenswrapper[27835]: I0318 13:25:21.472176 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.473106 master-0 kubenswrapper[27835]: I0318 13:25:21.472959 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.473549 master-0 kubenswrapper[27835]: I0318 13:25:21.473515 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.473610 master-0 kubenswrapper[27835]: I0318 13:25:21.473550 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.476751 master-0 kubenswrapper[27835]: I0318 13:25:21.476705 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 
18 13:25:21.477442 master-0 kubenswrapper[27835]: I0318 13:25:21.477403 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.488177 master-0 kubenswrapper[27835]: I0318 13:25:21.488111 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6dl\" (UniqueName: \"kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl\") pod \"console-5df65d974f-mpf5j\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") " pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:21.553802 master-0 kubenswrapper[27835]: I0318 13:25:21.553438 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:22.008235 master-0 kubenswrapper[27835]: I0318 13:25:22.008113 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5df65d974f-mpf5j"] Mar 18 13:25:22.016589 master-0 kubenswrapper[27835]: W0318 13:25:22.016541 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda42bf050_6c38_4023_a8b4_dc795f3aadc7.slice/crio-e5f3340ca1c039a5a630a44295ded1a43916f16a647a68807e432f618e8cb3db WatchSource:0}: Error finding container e5f3340ca1c039a5a630a44295ded1a43916f16a647a68807e432f618e8cb3db: Status 404 returned error can't find the container with id e5f3340ca1c039a5a630a44295ded1a43916f16a647a68807e432f618e8cb3db Mar 18 13:25:22.194962 master-0 kubenswrapper[27835]: I0318 13:25:22.194896 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df65d974f-mpf5j" 
event={"ID":"a42bf050-6c38-4023-a8b4-dc795f3aadc7","Type":"ContainerStarted","Data":"e5f3340ca1c039a5a630a44295ded1a43916f16a647a68807e432f618e8cb3db"} Mar 18 13:25:23.748622 master-0 kubenswrapper[27835]: I0318 13:25:23.748339 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-686bcb5cf-88rcq"] Mar 18 13:25:23.749483 master-0 kubenswrapper[27835]: I0318 13:25:23.749267 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.755175 master-0 kubenswrapper[27835]: I0318 13:25:23.755107 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-686bcb5cf-88rcq"] Mar 18 13:25:23.764443 master-0 kubenswrapper[27835]: I0318 13:25:23.764349 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810593 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810644 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810662 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810692 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810717 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810734 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nzff\" (UniqueName: \"kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.810796 master-0 kubenswrapper[27835]: I0318 13:25:23.810763 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913069 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913163 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913185 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913223 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913260 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 
13:25:23.913289 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nzff\" (UniqueName: \"kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.914112 master-0 kubenswrapper[27835]: I0318 13:25:23.913330 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.916478 master-0 kubenswrapper[27835]: I0318 13:25:23.916158 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.916478 master-0 kubenswrapper[27835]: I0318 13:25:23.916436 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.917129 master-0 kubenswrapper[27835]: I0318 13:25:23.917085 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.917379 master-0 
kubenswrapper[27835]: I0318 13:25:23.917345 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.927916 master-0 kubenswrapper[27835]: I0318 13:25:23.927828 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.930338 master-0 kubenswrapper[27835]: I0318 13:25:23.930118 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:23.931653 master-0 kubenswrapper[27835]: I0318 13:25:23.931621 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nzff\" (UniqueName: \"kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff\") pod \"console-686bcb5cf-88rcq\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") " pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:24.082218 master-0 kubenswrapper[27835]: I0318 13:25:24.082003 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:24.279735 master-0 kubenswrapper[27835]: I0318 13:25:24.279581 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-657fb76bf7-pvlvc"] Mar 18 13:25:24.283612 master-0 kubenswrapper[27835]: I0318 13:25:24.280493 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.289136 master-0 kubenswrapper[27835]: I0318 13:25:24.288894 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 13:25:24.289136 master-0 kubenswrapper[27835]: I0318 13:25:24.289129 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-vxpff" Mar 18 13:25:24.289473 master-0 kubenswrapper[27835]: I0318 13:25:24.289294 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 13:25:24.289761 master-0 kubenswrapper[27835]: I0318 13:25:24.289585 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 13:25:24.289990 master-0 kubenswrapper[27835]: I0318 13:25:24.289970 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.290474 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.292347 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.292546 27835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.293100 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.293255 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.293704 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 13:25:24.295581 master-0 kubenswrapper[27835]: I0318 13:25:24.294090 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 13:25:24.301057 master-0 kubenswrapper[27835]: I0318 13:25:24.301013 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-657fb76bf7-pvlvc"] Mar 18 13:25:24.315442 master-0 kubenswrapper[27835]: I0318 13:25:24.310193 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 13:25:24.318290 master-0 kubenswrapper[27835]: I0318 13:25:24.318226 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.318381 master-0 kubenswrapper[27835]: I0318 13:25:24.318305 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.318381 master-0 kubenswrapper[27835]: I0318 13:25:24.318362 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319321 master-0 kubenswrapper[27835]: I0318 13:25:24.319251 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvbnv\" (UniqueName: \"kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319384 master-0 kubenswrapper[27835]: I0318 13:25:24.319350 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319454 master-0 kubenswrapper[27835]: I0318 13:25:24.319393 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session\") pod 
\"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319518 master-0 kubenswrapper[27835]: I0318 13:25:24.319489 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319560 master-0 kubenswrapper[27835]: I0318 13:25:24.319519 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319560 master-0 kubenswrapper[27835]: I0318 13:25:24.319541 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319763 master-0 kubenswrapper[27835]: I0318 13:25:24.319644 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: 
\"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319763 master-0 kubenswrapper[27835]: I0318 13:25:24.319682 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319763 master-0 kubenswrapper[27835]: I0318 13:25:24.319727 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.319862 master-0 kubenswrapper[27835]: I0318 13:25:24.319782 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.322270 master-0 kubenswrapper[27835]: I0318 13:25:24.322242 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 13:25:24.421437 master-0 kubenswrapper[27835]: I0318 13:25:24.421363 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvbnv\" (UniqueName: 
\"kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.421684 master-0 kubenswrapper[27835]: I0318 13:25:24.421565 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.421684 master-0 kubenswrapper[27835]: I0318 13:25:24.421593 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422020 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422109 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: 
\"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422164 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422290 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422344 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422388 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422491 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422593 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422651 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.422759 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.423404 master-0 kubenswrapper[27835]: I0318 13:25:24.423345 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.424126 master-0 kubenswrapper[27835]: I0318 13:25:24.423999 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.424689 master-0 kubenswrapper[27835]: I0318 13:25:24.424308 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.424689 master-0 kubenswrapper[27835]: I0318 13:25:24.424453 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.425167 master-0 kubenswrapper[27835]: I0318 13:25:24.425112 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 
13:25:24.426317 master-0 kubenswrapper[27835]: I0318 13:25:24.426284 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.427495 master-0 kubenswrapper[27835]: I0318 13:25:24.427463 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.427872 master-0 kubenswrapper[27835]: I0318 13:25:24.427844 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.428588 master-0 kubenswrapper[27835]: I0318 13:25:24.428534 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.429735 master-0 kubenswrapper[27835]: I0318 13:25:24.429310 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.435933 master-0 kubenswrapper[27835]: I0318 13:25:24.435889 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.438211 master-0 kubenswrapper[27835]: I0318 13:25:24.438186 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvbnv\" (UniqueName: \"kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.445002 master-0 kubenswrapper[27835]: I0318 13:25:24.444825 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error\") pod \"oauth-openshift-657fb76bf7-pvlvc\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.616519 master-0 kubenswrapper[27835]: I0318 13:25:24.616332 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:24.627387 master-0 kubenswrapper[27835]: I0318 13:25:24.627314 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-686bcb5cf-88rcq"] Mar 18 13:25:24.638022 master-0 kubenswrapper[27835]: W0318 13:25:24.637956 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18d00d36_387c_4c03_affa_9abc8e2d4fe0.slice/crio-c3bc8b931abcfbab71285b0d05c76feae75b12d9b8c762c7ccaf412d14e7044b WatchSource:0}: Error finding container c3bc8b931abcfbab71285b0d05c76feae75b12d9b8c762c7ccaf412d14e7044b: Status 404 returned error can't find the container with id c3bc8b931abcfbab71285b0d05c76feae75b12d9b8c762c7ccaf412d14e7044b Mar 18 13:25:25.053241 master-0 kubenswrapper[27835]: I0318 13:25:25.053174 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-657fb76bf7-pvlvc"] Mar 18 13:25:25.229411 master-0 kubenswrapper[27835]: I0318 13:25:25.229311 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-686bcb5cf-88rcq" event={"ID":"18d00d36-387c-4c03-affa-9abc8e2d4fe0","Type":"ContainerStarted","Data":"c3bc8b931abcfbab71285b0d05c76feae75b12d9b8c762c7ccaf412d14e7044b"} Mar 18 13:25:25.849855 master-0 kubenswrapper[27835]: I0318 13:25:25.849764 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:25:25.871862 master-0 kubenswrapper[27835]: I0318 13:25:25.871795 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 13:25:25.950797 master-0 kubenswrapper[27835]: I0318 13:25:25.950721 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") pod \"9b853631-ff77-4643-aa07-b1f8056320a3\" (UID: \"9b853631-ff77-4643-aa07-b1f8056320a3\") " Mar 18 13:25:25.953285 master-0 kubenswrapper[27835]: I0318 13:25:25.953240 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9b853631-ff77-4643-aa07-b1f8056320a3" (UID: "9b853631-ff77-4643-aa07-b1f8056320a3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:26.052909 master-0 kubenswrapper[27835]: I0318 13:25:26.052832 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b853631-ff77-4643-aa07-b1f8056320a3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:26.713776 master-0 kubenswrapper[27835]: W0318 13:25:26.713698 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbc9ea24_0c80_4453_8313_f8ffe06714e5.slice/crio-2d7bb29d6f941018c028606e913bf7899c39d23a18acfd6c4bc5f8b9463603da WatchSource:0}: Error finding container 2d7bb29d6f941018c028606e913bf7899c39d23a18acfd6c4bc5f8b9463603da: Status 404 returned error can't find the container with id 2d7bb29d6f941018c028606e913bf7899c39d23a18acfd6c4bc5f8b9463603da Mar 18 13:25:27.244603 master-0 kubenswrapper[27835]: I0318 13:25:27.244556 27835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" event={"ID":"dbc9ea24-0c80-4453-8313-f8ffe06714e5","Type":"ContainerStarted","Data":"2d7bb29d6f941018c028606e913bf7899c39d23a18acfd6c4bc5f8b9463603da"} Mar 18 13:25:31.971488 master-0 kubenswrapper[27835]: I0318 13:25:31.969128 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"] Mar 18 13:25:31.971488 master-0 kubenswrapper[27835]: I0318 13:25:31.969327 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" containerID="cri-o://313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8" gracePeriod=30 Mar 18 13:25:32.029522 master-0 kubenswrapper[27835]: I0318 13:25:32.026290 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:25:32.029522 master-0 kubenswrapper[27835]: I0318 13:25:32.026606 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" podUID="a350f317-f058-4102-af5c-cbba46d35e02" containerName="route-controller-manager" containerID="cri-o://74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9" gracePeriod=30 Mar 18 13:25:36.677280 master-0 kubenswrapper[27835]: I0318 13:25:36.677215 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-657fb76bf7-pvlvc"] Mar 18 13:25:38.325639 master-0 kubenswrapper[27835]: I0318 13:25:38.325569 27835 generic.go:334] "Generic (PLEG): container finished" podID="a350f317-f058-4102-af5c-cbba46d35e02" containerID="74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9" exitCode=0 Mar 18 13:25:38.346195 master-0 
kubenswrapper[27835]: I0318 13:25:38.325690 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerDied","Data":"74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9"} Mar 18 13:25:38.346195 master-0 kubenswrapper[27835]: I0318 13:25:38.343571 27835 generic.go:334] "Generic (PLEG): container finished" podID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerID="313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8" exitCode=0 Mar 18 13:25:38.346195 master-0 kubenswrapper[27835]: I0318 13:25:38.343620 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerDied","Data":"313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8"} Mar 18 13:25:38.346195 master-0 kubenswrapper[27835]: I0318 13:25:38.343652 27835 scope.go:117] "RemoveContainer" containerID="8fd3086731035b08c09720259a6ef231b1be865d3ade946ceb31136e3b43913c" Mar 18 13:25:38.368588 master-0 kubenswrapper[27835]: I0318 13:25:38.365044 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-686bcb5cf-88rcq" podStartSLOduration=1.923422905 podStartE2EDuration="15.365021004s" podCreationTimestamp="2026-03-18 13:25:23 +0000 UTC" firstStartedPulling="2026-03-18 13:25:24.646764096 +0000 UTC m=+88.611975656" lastFinishedPulling="2026-03-18 13:25:38.088362205 +0000 UTC m=+102.053573755" observedRunningTime="2026-03-18 13:25:38.358872539 +0000 UTC m=+102.324084099" watchObservedRunningTime="2026-03-18 13:25:38.365021004 +0000 UTC m=+102.330232574" Mar 18 13:25:38.447898 master-0 kubenswrapper[27835]: I0318 13:25:38.447862 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:25:38.513155 master-0 kubenswrapper[27835]: I0318 13:25:38.513094 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8"] Mar 18 13:25:38.513918 master-0 kubenswrapper[27835]: E0318 13:25:38.513375 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a350f317-f058-4102-af5c-cbba46d35e02" containerName="route-controller-manager" Mar 18 13:25:38.513918 master-0 kubenswrapper[27835]: I0318 13:25:38.513400 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a350f317-f058-4102-af5c-cbba46d35e02" containerName="route-controller-manager" Mar 18 13:25:38.513918 master-0 kubenswrapper[27835]: I0318 13:25:38.513636 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a350f317-f058-4102-af5c-cbba46d35e02" containerName="route-controller-manager" Mar 18 13:25:38.514695 master-0 kubenswrapper[27835]: I0318 13:25:38.514198 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.521499 master-0 kubenswrapper[27835]: I0318 13:25:38.520582 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8"] Mar 18 13:25:38.527234 master-0 kubenswrapper[27835]: I0318 13:25:38.526151 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-hl5hl" Mar 18 13:25:38.530008 master-0 kubenswrapper[27835]: I0318 13:25:38.529663 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:25:38.572998 master-0 kubenswrapper[27835]: I0318 13:25:38.572935 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") pod \"a350f317-f058-4102-af5c-cbba46d35e02\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " Mar 18 13:25:38.573181 master-0 kubenswrapper[27835]: I0318 13:25:38.573029 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") pod \"a350f317-f058-4102-af5c-cbba46d35e02\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " Mar 18 13:25:38.573181 master-0 kubenswrapper[27835]: I0318 13:25:38.573086 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") pod \"a350f317-f058-4102-af5c-cbba46d35e02\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " Mar 18 13:25:38.573181 master-0 kubenswrapper[27835]: I0318 13:25:38.573129 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t56bf\" (UniqueName: \"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") pod \"a350f317-f058-4102-af5c-cbba46d35e02\" (UID: \"a350f317-f058-4102-af5c-cbba46d35e02\") " Mar 18 13:25:38.574258 master-0 kubenswrapper[27835]: I0318 13:25:38.574228 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca" (OuterVolumeSpecName: "client-ca") pod "a350f317-f058-4102-af5c-cbba46d35e02" (UID: "a350f317-f058-4102-af5c-cbba46d35e02"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:38.574310 master-0 kubenswrapper[27835]: I0318 13:25:38.574275 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config" (OuterVolumeSpecName: "config") pod "a350f317-f058-4102-af5c-cbba46d35e02" (UID: "a350f317-f058-4102-af5c-cbba46d35e02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:38.576444 master-0 kubenswrapper[27835]: I0318 13:25:38.576399 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf" (OuterVolumeSpecName: "kube-api-access-t56bf") pod "a350f317-f058-4102-af5c-cbba46d35e02" (UID: "a350f317-f058-4102-af5c-cbba46d35e02"). InnerVolumeSpecName "kube-api-access-t56bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:38.576863 master-0 kubenswrapper[27835]: I0318 13:25:38.576815 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a350f317-f058-4102-af5c-cbba46d35e02" (UID: "a350f317-f058-4102-af5c-cbba46d35e02"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:38.674627 master-0 kubenswrapper[27835]: I0318 13:25:38.674553 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") pod \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " Mar 18 13:25:38.674627 master-0 kubenswrapper[27835]: I0318 13:25:38.674657 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") pod \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674689 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") pod \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674731 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") pod \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674750 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") pod \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\" (UID: \"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7\") " Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674919 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101fe9f1-3211-4844-bb11-c1b6c7696e10-serving-cert\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674973 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twcsj\" (UniqueName: \"kubernetes.io/projected/101fe9f1-3211-4844-bb11-c1b6c7696e10-kube-api-access-twcsj\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.674994 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-client-ca\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.675026 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-config\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.675127 master-0 kubenswrapper[27835]: I0318 13:25:38.675090 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t56bf\" (UniqueName: 
\"kubernetes.io/projected/a350f317-f058-4102-af5c-cbba46d35e02-kube-api-access-t56bf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.675369 master-0 kubenswrapper[27835]: I0318 13:25:38.675173 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.675369 master-0 kubenswrapper[27835]: I0318 13:25:38.675215 27835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a350f317-f058-4102-af5c-cbba46d35e02-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.675369 master-0 kubenswrapper[27835]: I0318 13:25:38.675254 27835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a350f317-f058-4102-af5c-cbba46d35e02-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.675717 master-0 kubenswrapper[27835]: I0318 13:25:38.675481 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config" (OuterVolumeSpecName: "config") pod "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:38.675893 master-0 kubenswrapper[27835]: I0318 13:25:38.675822 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:38.675956 master-0 kubenswrapper[27835]: I0318 13:25:38.675923 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:38.677806 master-0 kubenswrapper[27835]: I0318 13:25:38.677724 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:38.678911 master-0 kubenswrapper[27835]: I0318 13:25:38.678873 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f" (OuterVolumeSpecName: "kube-api-access-5qn7f") pod "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" (UID: "c6c35e08-cdbc-4a86-a64a-3e5c34e941d7"). InnerVolumeSpecName "kube-api-access-5qn7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:38.775937 master-0 kubenswrapper[27835]: I0318 13:25:38.775848 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twcsj\" (UniqueName: \"kubernetes.io/projected/101fe9f1-3211-4844-bb11-c1b6c7696e10-kube-api-access-twcsj\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.775937 master-0 kubenswrapper[27835]: I0318 13:25:38.775907 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-client-ca\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.775983 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-config\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776061 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101fe9f1-3211-4844-bb11-c1b6c7696e10-serving-cert\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776135 27835 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776148 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776158 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qn7f\" (UniqueName: \"kubernetes.io/projected/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-kube-api-access-5qn7f\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776169 27835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.776386 master-0 kubenswrapper[27835]: I0318 13:25:38.776178 27835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:38.777375 master-0 kubenswrapper[27835]: I0318 13:25:38.777310 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-client-ca\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.779170 master-0 kubenswrapper[27835]: I0318 13:25:38.779131 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101fe9f1-3211-4844-bb11-c1b6c7696e10-config\") pod 
\"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.779783 master-0 kubenswrapper[27835]: I0318 13:25:38.779739 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/101fe9f1-3211-4844-bb11-c1b6c7696e10-serving-cert\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:38.939320 master-0 kubenswrapper[27835]: I0318 13:25:38.939249 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twcsj\" (UniqueName: \"kubernetes.io/projected/101fe9f1-3211-4844-bb11-c1b6c7696e10-kube-api-access-twcsj\") pod \"route-controller-manager-6c597dfd69-w66v8\" (UID: \"101fe9f1-3211-4844-bb11-c1b6c7696e10\") " pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:39.151824 master-0 kubenswrapper[27835]: I0318 13:25:39.151739 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:39.355537 master-0 kubenswrapper[27835]: I0318 13:25:39.355028 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" Mar 18 13:25:39.355537 master-0 kubenswrapper[27835]: I0318 13:25:39.355038 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d7c95db55-d6lqm" event={"ID":"c6c35e08-cdbc-4a86-a64a-3e5c34e941d7","Type":"ContainerDied","Data":"3f4c5edfdc04ff6f06a18f7e79a33fe2c7ca34a279290a61c3b81818bc079d6b"} Mar 18 13:25:39.355537 master-0 kubenswrapper[27835]: I0318 13:25:39.355124 27835 scope.go:117] "RemoveContainer" containerID="313c72120bec2b6d08365ada8135c3dfd105d61c037f0f5155256e309f9275b8" Mar 18 13:25:39.358788 master-0 kubenswrapper[27835]: I0318 13:25:39.358681 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-686bcb5cf-88rcq" event={"ID":"18d00d36-387c-4c03-affa-9abc8e2d4fe0","Type":"ContainerStarted","Data":"f34e7adb2cb006bc8a93977875f942662294b291d04af30371e70d1940adf03d"} Mar 18 13:25:39.360993 master-0 kubenswrapper[27835]: I0318 13:25:39.360936 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df65d974f-mpf5j" event={"ID":"a42bf050-6c38-4023-a8b4-dc795f3aadc7","Type":"ContainerStarted","Data":"bcea1c31c8eafd3e0cc4eaee9afbe2a8bdda8a5acc6c08c03a14414c75f82365"} Mar 18 13:25:39.363699 master-0 kubenswrapper[27835]: I0318 13:25:39.363299 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" event={"ID":"a350f317-f058-4102-af5c-cbba46d35e02","Type":"ContainerDied","Data":"71a8c5f3dcdb995fab9a53a08bf0c85f486ffb1845cc1c509788983f480ff491"} Mar 18 13:25:39.363699 master-0 kubenswrapper[27835]: I0318 13:25:39.363386 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6" Mar 18 13:25:39.374624 master-0 kubenswrapper[27835]: I0318 13:25:39.374008 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-2tkdh" event={"ID":"3f688009-66eb-490d-a0fb-464dba69fb96","Type":"ContainerStarted","Data":"d2b53c607ca2165991fecbeb2fe9cca8f715da2698f1d668b3f16391259175b4"} Mar 18 13:25:39.374624 master-0 kubenswrapper[27835]: I0318 13:25:39.374223 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:39.382207 master-0 kubenswrapper[27835]: I0318 13:25:39.382165 27835 patch_prober.go:28] interesting pod/downloads-66b8ffb895-2tkdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.87:8080/\": dial tcp 10.128.0.87:8080: connect: connection refused" start-of-body= Mar 18 13:25:39.382370 master-0 kubenswrapper[27835]: I0318 13:25:39.382227 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-2tkdh" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.87:8080/\": dial tcp 10.128.0.87:8080: connect: connection refused" Mar 18 13:25:39.391732 master-0 kubenswrapper[27835]: I0318 13:25:39.391695 27835 scope.go:117] "RemoveContainer" containerID="74afab1d4776e159eb27ac77593909c8a0f9782fdc2bad1e15b99fc960c20db9" Mar 18 13:25:39.656180 master-0 kubenswrapper[27835]: I0318 13:25:39.655856 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5df65d974f-mpf5j" podStartSLOduration=2.59292192 podStartE2EDuration="18.655777308s" podCreationTimestamp="2026-03-18 13:25:21 +0000 UTC" firstStartedPulling="2026-03-18 13:25:22.019730594 +0000 UTC m=+85.984942154" lastFinishedPulling="2026-03-18 
13:25:38.082585992 +0000 UTC m=+102.047797542" observedRunningTime="2026-03-18 13:25:39.651857452 +0000 UTC m=+103.617069032" watchObservedRunningTime="2026-03-18 13:25:39.655777308 +0000 UTC m=+103.620988948" Mar 18 13:25:40.387716 master-0 kubenswrapper[27835]: I0318 13:25:40.387665 27835 patch_prober.go:28] interesting pod/downloads-66b8ffb895-2tkdh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.87:8080/\": dial tcp 10.128.0.87:8080: connect: connection refused" start-of-body= Mar 18 13:25:40.388806 master-0 kubenswrapper[27835]: I0318 13:25:40.388766 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-2tkdh" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.87:8080/\": dial tcp 10.128.0.87:8080: connect: connection refused" Mar 18 13:25:40.664147 master-0 kubenswrapper[27835]: I0318 13:25:40.664107 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8"] Mar 18 13:25:40.667704 master-0 kubenswrapper[27835]: I0318 13:25:40.665369 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-2tkdh" podStartSLOduration=4.268081598 podStartE2EDuration="36.66535368s" podCreationTimestamp="2026-03-18 13:25:04 +0000 UTC" firstStartedPulling="2026-03-18 13:25:05.843317758 +0000 UTC m=+69.808529328" lastFinishedPulling="2026-03-18 13:25:38.24058986 +0000 UTC m=+102.205801410" observedRunningTime="2026-03-18 13:25:40.661641638 +0000 UTC m=+104.626853238" watchObservedRunningTime="2026-03-18 13:25:40.66535368 +0000 UTC m=+104.630565240" Mar 18 13:25:41.554353 master-0 kubenswrapper[27835]: I0318 13:25:41.554028 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 
13:25:41.554353 master-0 kubenswrapper[27835]: I0318 13:25:41.554164 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5df65d974f-mpf5j" Mar 18 13:25:41.556976 master-0 kubenswrapper[27835]: I0318 13:25:41.556908 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:25:41.557100 master-0 kubenswrapper[27835]: I0318 13:25:41.557006 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:25:41.728758 master-0 kubenswrapper[27835]: I0318 13:25:41.728692 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f77b8654b-44f6m"] Mar 18 13:25:41.729050 master-0 kubenswrapper[27835]: E0318 13:25:41.728919 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:41.729050 master-0 kubenswrapper[27835]: I0318 13:25:41.728932 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:41.729050 master-0 kubenswrapper[27835]: E0318 13:25:41.728942 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:41.729050 master-0 kubenswrapper[27835]: I0318 13:25:41.728949 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:41.729458 
master-0 kubenswrapper[27835]: I0318 13:25:41.729082 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:41.729589 master-0 kubenswrapper[27835]: I0318 13:25:41.729476 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.735368 master-0 kubenswrapper[27835]: I0318 13:25:41.735290 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:25:41.739372 master-0 kubenswrapper[27835]: I0318 13:25:41.739328 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:25:41.739842 master-0 kubenswrapper[27835]: I0318 13:25:41.739751 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:25:41.739968 master-0 kubenswrapper[27835]: I0318 13:25:41.739844 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l" Mar 18 13:25:41.739968 master-0 kubenswrapper[27835]: I0318 13:25:41.739854 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 13:25:41.740187 master-0 kubenswrapper[27835]: I0318 13:25:41.740042 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:25:41.746012 master-0 kubenswrapper[27835]: I0318 13:25:41.745960 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:25:41.851670 master-0 kubenswrapper[27835]: I0318 13:25:41.851557 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-config\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.851670 master-0 kubenswrapper[27835]: I0318 13:25:41.851621 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxck\" (UniqueName: \"kubernetes.io/projected/e315913d-cd6e-4455-9035-b35d24adbb2a-kube-api-access-mwxck\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.851670 master-0 kubenswrapper[27835]: I0318 13:25:41.851654 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-client-ca\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.851893 master-0 kubenswrapper[27835]: I0318 13:25:41.851696 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e315913d-cd6e-4455-9035-b35d24adbb2a-serving-cert\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.851893 master-0 kubenswrapper[27835]: I0318 13:25:41.851794 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-proxy-ca-bundles\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: 
\"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.953219 master-0 kubenswrapper[27835]: I0318 13:25:41.953125 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-proxy-ca-bundles\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.953219 master-0 kubenswrapper[27835]: I0318 13:25:41.953205 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-config\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.953219 master-0 kubenswrapper[27835]: I0318 13:25:41.953232 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxck\" (UniqueName: \"kubernetes.io/projected/e315913d-cd6e-4455-9035-b35d24adbb2a-kube-api-access-mwxck\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.953848 master-0 kubenswrapper[27835]: I0318 13:25:41.953252 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-client-ca\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.953848 master-0 kubenswrapper[27835]: I0318 13:25:41.953305 27835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e315913d-cd6e-4455-9035-b35d24adbb2a-serving-cert\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.955874 master-0 kubenswrapper[27835]: I0318 13:25:41.955825 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-config\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.956901 master-0 kubenswrapper[27835]: I0318 13:25:41.956860 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-proxy-ca-bundles\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.957310 master-0 kubenswrapper[27835]: I0318 13:25:41.957102 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e315913d-cd6e-4455-9035-b35d24adbb2a-client-ca\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:41.959559 master-0 kubenswrapper[27835]: I0318 13:25:41.959513 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e315913d-cd6e-4455-9035-b35d24adbb2a-serving-cert\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:42.405843 master-0 
kubenswrapper[27835]: I0318 13:25:42.405677 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" event={"ID":"101fe9f1-3211-4844-bb11-c1b6c7696e10","Type":"ContainerStarted","Data":"0773c5941cce70b677d825abf0728992b276df184ebc3fd800495be5380db684"} Mar 18 13:25:42.762182 master-0 kubenswrapper[27835]: I0318 13:25:42.762067 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f77b8654b-44f6m"] Mar 18 13:25:43.416150 master-0 kubenswrapper[27835]: I0318 13:25:43.416092 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" event={"ID":"101fe9f1-3211-4844-bb11-c1b6c7696e10","Type":"ContainerStarted","Data":"d6e44a488f915b9329f46d99b6e64b13b9094fb13286af15bee48392950cfc2a"} Mar 18 13:25:43.416868 master-0 kubenswrapper[27835]: I0318 13:25:43.416844 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:43.497566 master-0 kubenswrapper[27835]: I0318 13:25:43.497458 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxck\" (UniqueName: \"kubernetes.io/projected/e315913d-cd6e-4455-9035-b35d24adbb2a-kube-api-access-mwxck\") pod \"controller-manager-5f77b8654b-44f6m\" (UID: \"e315913d-cd6e-4455-9035-b35d24adbb2a\") " pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:43.581134 master-0 kubenswrapper[27835]: I0318 13:25:43.581007 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:44.082582 master-0 kubenswrapper[27835]: I0318 13:25:44.082373 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:44.083601 master-0 kubenswrapper[27835]: I0318 13:25:44.083491 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-686bcb5cf-88rcq" Mar 18 13:25:44.085637 master-0 kubenswrapper[27835]: I0318 13:25:44.085566 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:25:44.085720 master-0 kubenswrapper[27835]: I0318 13:25:44.085661 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:25:44.135230 master-0 kubenswrapper[27835]: I0318 13:25:44.135148 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"] Mar 18 13:25:44.417578 master-0 kubenswrapper[27835]: I0318 13:25:44.416696 27835 patch_prober.go:28] interesting pod/route-controller-manager-6c597dfd69-w66v8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 13:25:44.417578 master-0 kubenswrapper[27835]: I0318 13:25:44.416773 27835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.91:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 13:25:44.460527 master-0 kubenswrapper[27835]: I0318 13:25:44.459895 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:25:44.461406 master-0 kubenswrapper[27835]: I0318 13:25:44.461334 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" containerName="controller-manager" Mar 18 13:25:44.463488 master-0 kubenswrapper[27835]: I0318 13:25:44.462464 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.464705 master-0 kubenswrapper[27835]: I0318 13:25:44.464446 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:25:44.464843 master-0 kubenswrapper[27835]: I0318 13:25:44.464702 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1" gracePeriod=15 Mar 18 13:25:44.464916 master-0 kubenswrapper[27835]: I0318 13:25:44.464742 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453" gracePeriod=15 Mar 18 13:25:44.464977 master-0 
kubenswrapper[27835]: I0318 13:25:44.464779 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1" gracePeriod=15 Mar 18 13:25:44.464977 master-0 kubenswrapper[27835]: I0318 13:25:44.464790 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8" gracePeriod=15 Mar 18 13:25:44.464977 master-0 kubenswrapper[27835]: I0318 13:25:44.464799 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d" gracePeriod=15 Mar 18 13:25:44.467953 master-0 kubenswrapper[27835]: I0318 13:25:44.467903 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:25:44.468285 master-0 kubenswrapper[27835]: E0318 13:25:44.468250 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 13:25:44.468285 master-0 kubenswrapper[27835]: I0318 13:25:44.468278 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: E0318 13:25:44.468292 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:25:44.468402 master-0 
kubenswrapper[27835]: I0318 13:25:44.468303 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: E0318 13:25:44.468330 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: I0318 13:25:44.468338 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: E0318 13:25:44.468346 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: I0318 13:25:44.468354 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: E0318 13:25:44.468372 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: I0318 13:25:44.468380 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: E0318 13:25:44.468392 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:25:44.468402 master-0 kubenswrapper[27835]: I0318 13:25:44.468400 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:25:44.469362 
master-0 kubenswrapper[27835]: I0318 13:25:44.468618 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.468643 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.468656 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.468666 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.468681 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: E0318 13:25:44.469069 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.469083 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.469362 master-0 kubenswrapper[27835]: I0318 13:25:44.469269 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 13:25:44.540757 master-0 kubenswrapper[27835]: I0318 13:25:44.540704 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d7c95db55-d6lqm"] Mar 18 
13:25:44.563176 master-0 kubenswrapper[27835]: E0318 13:25:44.563109 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d.scope\": RecentStats: unable to find data in memory cache]" Mar 18 13:25:44.600819 master-0 kubenswrapper[27835]: I0318 13:25:44.600724 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.600956 master-0 kubenswrapper[27835]: I0318 13:25:44.600819 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.600956 master-0 kubenswrapper[27835]: I0318 13:25:44.600911 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.601025 master-0 kubenswrapper[27835]: I0318 13:25:44.600989 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.601292 master-0 kubenswrapper[27835]: I0318 13:25:44.601276 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.601331 master-0 kubenswrapper[27835]: I0318 13:25:44.601300 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.601365 master-0 kubenswrapper[27835]: I0318 13:25:44.601347 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.601558 master-0 kubenswrapper[27835]: I0318 
13:25:44.601515 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708697 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708784 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708812 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708845 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708887 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708899 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f77b8654b-44f6m"] Mar 18 13:25:44.708953 master-0 kubenswrapper[27835]: I0318 13:25:44.708924 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.708991 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709033 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709094 
27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709097 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709114 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709189 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709242 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.709467 master-0 
kubenswrapper[27835]: I0318 13:25:44.709257 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709303 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.709467 master-0 kubenswrapper[27835]: I0318 13:25:44.709330 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:44.710179 master-0 kubenswrapper[27835]: I0318 13:25:44.710140 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" Mar 18 13:25:44.987099 master-0 kubenswrapper[27835]: I0318 13:25:44.986633 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:25:44.990979 master-0 kubenswrapper[27835]: I0318 13:25:44.990904 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:25:45.386695 master-0 kubenswrapper[27835]: I0318 13:25:45.386352 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-2tkdh" Mar 18 13:25:46.008498 master-0 kubenswrapper[27835]: I0318 13:25:46.008430 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:25:46.295107 master-0 kubenswrapper[27835]: I0318 13:25:46.294977 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c35e08-cdbc-4a86-a64a-3e5c34e941d7" path="/var/lib/kubelet/pods/c6c35e08-cdbc-4a86-a64a-3e5c34e941d7/volumes" Mar 18 13:25:46.439191 master-0 kubenswrapper[27835]: I0318 13:25:46.439112 27835 generic.go:334] "Generic (PLEG): container finished" podID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" containerID="45c1bfe81a4ec9a67e0f96ccae8aa8e92cc20e9572ced1d331993a3be67d4dd1" exitCode=0 Mar 18 13:25:46.439191 master-0 kubenswrapper[27835]: I0318 13:25:46.439168 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0737b13d-faed-44e2-9d20-1f3860dcc9bd","Type":"ContainerDied","Data":"45c1bfe81a4ec9a67e0f96ccae8aa8e92cc20e9572ced1d331993a3be67d4dd1"} Mar 18 13:25:46.441010 master-0 kubenswrapper[27835]: I0318 13:25:46.440945 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" event={"ID":"e315913d-cd6e-4455-9035-b35d24adbb2a","Type":"ContainerStarted","Data":"d8b33a66574a579a528359540630e59af72346a12d9326f37dfaf5b6f6d366d9"} Mar 18 13:25:46.443893 master-0 kubenswrapper[27835]: I0318 
13:25:46.443841 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 13:25:46.447310 master-0 kubenswrapper[27835]: I0318 13:25:46.447254 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:25:46.449473 master-0 kubenswrapper[27835]: I0318 13:25:46.449438 27835 scope.go:117] "RemoveContainer" containerID="721fa0a6e32ffbe367060749a069ffa65b9f6ad129708e70bf8fe6c632945146" Mar 18 13:25:46.449473 master-0 kubenswrapper[27835]: I0318 13:25:46.449334 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453" exitCode=0 Mar 18 13:25:46.449656 master-0 kubenswrapper[27835]: I0318 13:25:46.449502 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8" exitCode=0 Mar 18 13:25:46.449656 master-0 kubenswrapper[27835]: I0318 13:25:46.449518 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1" exitCode=0 Mar 18 13:25:46.449656 master-0 kubenswrapper[27835]: I0318 13:25:46.449534 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d" exitCode=2 Mar 18 13:25:46.836584 master-0 kubenswrapper[27835]: W0318 13:25:46.836516 27835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fb4ea7f83036d9c6adf3454fc7e9db.slice/crio-4f6ee4dcfb5d4c804fbce249594f74693fd17ac740793507c513743eef69de11 WatchSource:0}: Error finding container 4f6ee4dcfb5d4c804fbce249594f74693fd17ac740793507c513743eef69de11: Status 404 returned error can't find the container with id 4f6ee4dcfb5d4c804fbce249594f74693fd17ac740793507c513743eef69de11 Mar 18 13:25:47.475464 master-0 kubenswrapper[27835]: I0318 13:25:47.465393 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 13:25:47.491299 master-0 kubenswrapper[27835]: I0318 13:25:47.484994 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8888769b-8mxp6"] Mar 18 13:25:47.516445 master-0 kubenswrapper[27835]: I0318 13:25:47.512636 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"4f6ee4dcfb5d4c804fbce249594f74693fd17ac740793507c513743eef69de11"} Mar 18 13:25:47.540341 master-0 kubenswrapper[27835]: I0318 13:25:47.540209 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:25:47.917913 master-0 kubenswrapper[27835]: I0318 13:25:47.917847 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:48.069668 master-0 kubenswrapper[27835]: I0318 13:25:48.069444 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access\") pod \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " Mar 18 13:25:48.069668 master-0 kubenswrapper[27835]: I0318 13:25:48.069576 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock\") pod \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " Mar 18 13:25:48.069994 master-0 kubenswrapper[27835]: I0318 13:25:48.069684 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock" (OuterVolumeSpecName: "var-lock") pod "0737b13d-faed-44e2-9d20-1f3860dcc9bd" (UID: "0737b13d-faed-44e2-9d20-1f3860dcc9bd"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:48.069994 master-0 kubenswrapper[27835]: I0318 13:25:48.069840 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir\") pod \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\" (UID: \"0737b13d-faed-44e2-9d20-1f3860dcc9bd\") " Mar 18 13:25:48.069994 master-0 kubenswrapper[27835]: I0318 13:25:48.069912 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0737b13d-faed-44e2-9d20-1f3860dcc9bd" (UID: "0737b13d-faed-44e2-9d20-1f3860dcc9bd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:48.070199 master-0 kubenswrapper[27835]: I0318 13:25:48.070176 27835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:48.070199 master-0 kubenswrapper[27835]: I0318 13:25:48.070194 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0737b13d-faed-44e2-9d20-1f3860dcc9bd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:48.072949 master-0 kubenswrapper[27835]: I0318 13:25:48.072887 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0737b13d-faed-44e2-9d20-1f3860dcc9bd" (UID: "0737b13d-faed-44e2-9d20-1f3860dcc9bd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:48.171542 master-0 kubenswrapper[27835]: I0318 13:25:48.171467 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0737b13d-faed-44e2-9d20-1f3860dcc9bd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:48.295751 master-0 kubenswrapper[27835]: I0318 13:25:48.295692 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a350f317-f058-4102-af5c-cbba46d35e02" path="/var/lib/kubelet/pods/a350f317-f058-4102-af5c-cbba46d35e02/volumes" Mar 18 13:25:48.558837 master-0 kubenswrapper[27835]: I0318 13:25:48.558727 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0737b13d-faed-44e2-9d20-1f3860dcc9bd","Type":"ContainerDied","Data":"6cf7021ca6792db8c60b9145b52ea7d21d54d79b1809f86547fc87d8f8a60ed1"} Mar 18 13:25:48.558837 master-0 kubenswrapper[27835]: I0318 13:25:48.558805 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cf7021ca6792db8c60b9145b52ea7d21d54d79b1809f86547fc87d8f8a60ed1" Mar 18 13:25:48.558837 master-0 kubenswrapper[27835]: I0318 13:25:48.558742 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 13:25:48.561934 master-0 kubenswrapper[27835]: I0318 13:25:48.561191 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-657fb76bf7-pvlvc_dbc9ea24-0c80-4453-8313-f8ffe06714e5/oauth-openshift/0.log" Mar 18 13:25:48.561934 master-0 kubenswrapper[27835]: I0318 13:25:48.561283 27835 generic.go:334] "Generic (PLEG): container finished" podID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" containerID="db8fa3e7589d4fd562025d921bf9529df0a2974afef93caf5adc7e5173616ff1" exitCode=255 Mar 18 13:25:48.561934 master-0 kubenswrapper[27835]: I0318 13:25:48.561351 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" event={"ID":"dbc9ea24-0c80-4453-8313-f8ffe06714e5","Type":"ContainerDied","Data":"db8fa3e7589d4fd562025d921bf9529df0a2974afef93caf5adc7e5173616ff1"} Mar 18 13:25:48.565101 master-0 kubenswrapper[27835]: I0318 13:25:48.564799 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" event={"ID":"e315913d-cd6e-4455-9035-b35d24adbb2a","Type":"ContainerStarted","Data":"6a4a14dddbf5c1aeb637360197f1ae4a43e3fd59799a62f7a4ff11648b6d2baf"} Mar 18 13:25:48.565101 master-0 kubenswrapper[27835]: I0318 13:25:48.564960 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:48.569140 master-0 kubenswrapper[27835]: I0318 13:25:48.569079 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"a47091116d34af37c6ea269c78da163805f18cfa0d6c2e8a8c3b428da6f84af7"} Mar 18 13:25:48.571987 master-0 kubenswrapper[27835]: I0318 13:25:48.571944 27835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" Mar 18 13:25:48.914968 master-0 kubenswrapper[27835]: I0318 13:25:48.914887 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-657fb76bf7-pvlvc_dbc9ea24-0c80-4453-8313-f8ffe06714e5/oauth-openshift/0.log" Mar 18 13:25:48.914968 master-0 kubenswrapper[27835]: I0318 13:25:48.914966 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:49.082871 master-0 kubenswrapper[27835]: I0318 13:25:49.082729 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvbnv\" (UniqueName: \"kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.082871 master-0 kubenswrapper[27835]: I0318 13:25:49.082828 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083266 master-0 kubenswrapper[27835]: I0318 13:25:49.082934 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083266 master-0 kubenswrapper[27835]: I0318 13:25:49.083041 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies\") 
pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083266 master-0 kubenswrapper[27835]: I0318 13:25:49.083086 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083266 master-0 kubenswrapper[27835]: I0318 13:25:49.083171 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083266 master-0 kubenswrapper[27835]: I0318 13:25:49.083230 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083280 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083376 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083524 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083578 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083626 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.083755 master-0 kubenswrapper[27835]: I0318 13:25:49.083682 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection\") pod \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\" (UID: \"dbc9ea24-0c80-4453-8313-f8ffe06714e5\") " Mar 18 13:25:49.085586 master-0 kubenswrapper[27835]: I0318 13:25:49.084649 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:49.085837 master-0 kubenswrapper[27835]: I0318 13:25:49.085738 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:49.086520 master-0 kubenswrapper[27835]: I0318 13:25:49.086455 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:49.086768 master-0 kubenswrapper[27835]: I0318 13:25:49.086701 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:49.086859 master-0 kubenswrapper[27835]: I0318 13:25:49.086810 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.086859 master-0 kubenswrapper[27835]: I0318 13:25:49.086834 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:25:49.088618 master-0 kubenswrapper[27835]: I0318 13:25:49.088530 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.089862 master-0 kubenswrapper[27835]: I0318 13:25:49.089784 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.090318 master-0 kubenswrapper[27835]: I0318 13:25:49.090266 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.090661 master-0 kubenswrapper[27835]: I0318 13:25:49.090601 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.091236 master-0 kubenswrapper[27835]: I0318 13:25:49.091161 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.093883 master-0 kubenswrapper[27835]: I0318 13:25:49.093691 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv" (OuterVolumeSpecName: "kube-api-access-bvbnv") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). 
InnerVolumeSpecName "kube-api-access-bvbnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:25:49.094253 master-0 kubenswrapper[27835]: I0318 13:25:49.094195 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dbc9ea24-0c80-4453-8313-f8ffe06714e5" (UID: "dbc9ea24-0c80-4453-8313-f8ffe06714e5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:25:49.185394 master-0 kubenswrapper[27835]: I0318 13:25:49.185302 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185394 master-0 kubenswrapper[27835]: I0318 13:25:49.185375 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185394 master-0 kubenswrapper[27835]: I0318 13:25:49.185396 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185435 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185458 27835 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185479 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvbnv\" (UniqueName: \"kubernetes.io/projected/dbc9ea24-0c80-4453-8313-f8ffe06714e5-kube-api-access-bvbnv\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185496 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185516 27835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185533 27835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185546 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185557 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185570 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.185834 master-0 kubenswrapper[27835]: I0318 13:25:49.185582 27835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dbc9ea24-0c80-4453-8313-f8ffe06714e5-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:49.588276 master-0 kubenswrapper[27835]: I0318 13:25:49.588138 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-657fb76bf7-pvlvc_dbc9ea24-0c80-4453-8313-f8ffe06714e5/oauth-openshift/0.log" Mar 18 13:25:49.588908 master-0 kubenswrapper[27835]: I0318 13:25:49.588268 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" event={"ID":"dbc9ea24-0c80-4453-8313-f8ffe06714e5","Type":"ContainerDied","Data":"2d7bb29d6f941018c028606e913bf7899c39d23a18acfd6c4bc5f8b9463603da"} Mar 18 13:25:49.588908 master-0 kubenswrapper[27835]: I0318 13:25:49.588339 27835 scope.go:117] "RemoveContainer" containerID="db8fa3e7589d4fd562025d921bf9529df0a2974afef93caf5adc7e5173616ff1" Mar 18 13:25:49.588908 master-0 kubenswrapper[27835]: I0318 13:25:49.588807 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" Mar 18 13:25:49.958603 master-0 kubenswrapper[27835]: I0318 13:25:49.958535 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:25:49.959534 master-0 kubenswrapper[27835]: I0318 13:25:49.959491 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:50.097251 master-0 kubenswrapper[27835]: I0318 13:25:50.097137 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:25:50.097623 master-0 kubenswrapper[27835]: I0318 13:25:50.097289 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:25:50.097623 master-0 kubenswrapper[27835]: I0318 13:25:50.097344 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 13:25:50.097970 master-0 kubenswrapper[27835]: I0318 13:25:50.097858 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:50.097970 master-0 kubenswrapper[27835]: I0318 13:25:50.097890 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:50.097970 master-0 kubenswrapper[27835]: I0318 13:25:50.097948 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:25:50.199657 master-0 kubenswrapper[27835]: I0318 13:25:50.199604 27835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:50.199955 master-0 kubenswrapper[27835]: I0318 13:25:50.199933 27835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:50.200081 master-0 kubenswrapper[27835]: I0318 13:25:50.200060 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:25:50.289534 master-0 kubenswrapper[27835]: I0318 13:25:50.289450 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes" 
Mar 18 13:25:50.600648 master-0 kubenswrapper[27835]: I0318 13:25:50.600559 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 13:25:50.601595 master-0 kubenswrapper[27835]: I0318 13:25:50.601549 27835 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1" exitCode=0 Mar 18 13:25:50.602222 master-0 kubenswrapper[27835]: I0318 13:25:50.602171 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:25:50.602329 master-0 kubenswrapper[27835]: I0318 13:25:50.602249 27835 scope.go:117] "RemoveContainer" containerID="6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453" Mar 18 13:25:50.619642 master-0 kubenswrapper[27835]: I0318 13:25:50.619596 27835 scope.go:117] "RemoveContainer" containerID="c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8" Mar 18 13:25:50.634138 master-0 kubenswrapper[27835]: I0318 13:25:50.633995 27835 scope.go:117] "RemoveContainer" containerID="9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1" Mar 18 13:25:50.650571 master-0 kubenswrapper[27835]: I0318 13:25:50.650551 27835 scope.go:117] "RemoveContainer" containerID="1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d" Mar 18 13:25:50.665050 master-0 kubenswrapper[27835]: I0318 13:25:50.665014 27835 scope.go:117] "RemoveContainer" containerID="65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1" Mar 18 13:25:50.679723 master-0 kubenswrapper[27835]: I0318 13:25:50.679686 27835 scope.go:117] "RemoveContainer" containerID="04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b" Mar 18 13:25:50.693522 master-0 kubenswrapper[27835]: I0318 13:25:50.693486 27835 scope.go:117] 
"RemoveContainer" containerID="6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453" Mar 18 13:25:50.693941 master-0 kubenswrapper[27835]: E0318 13:25:50.693903 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453\": container with ID starting with 6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453 not found: ID does not exist" containerID="6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453" Mar 18 13:25:50.693992 master-0 kubenswrapper[27835]: I0318 13:25:50.693946 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453"} err="failed to get container status \"6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453\": rpc error: code = NotFound desc = could not find container \"6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453\": container with ID starting with 6ad928278a5d1eff5df519521f7ea13f1a3eb8fdfb5c1803c15d7810fb9de453 not found: ID does not exist" Mar 18 13:25:50.693992 master-0 kubenswrapper[27835]: I0318 13:25:50.693973 27835 scope.go:117] "RemoveContainer" containerID="c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8" Mar 18 13:25:50.694366 master-0 kubenswrapper[27835]: E0318 13:25:50.694345 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8\": container with ID starting with c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8 not found: ID does not exist" containerID="c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8" Mar 18 13:25:50.694621 master-0 kubenswrapper[27835]: I0318 13:25:50.694367 27835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8"} err="failed to get container status \"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8\": rpc error: code = NotFound desc = could not find container \"c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8\": container with ID starting with c7fc7f0847ee79291a6574d7683fcc6d9f536ed82d5caf4aefcfccd3e9a849e8 not found: ID does not exist" Mar 18 13:25:50.694621 master-0 kubenswrapper[27835]: I0318 13:25:50.694380 27835 scope.go:117] "RemoveContainer" containerID="9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1" Mar 18 13:25:50.695237 master-0 kubenswrapper[27835]: E0318 13:25:50.695185 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1\": container with ID starting with 9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1 not found: ID does not exist" containerID="9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1" Mar 18 13:25:50.695287 master-0 kubenswrapper[27835]: I0318 13:25:50.695231 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1"} err="failed to get container status \"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1\": rpc error: code = NotFound desc = could not find container \"9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1\": container with ID starting with 9cf2f9b8128dbf10ede3eb562a24cc58c7aba33dba8eeddf4d4f3e24e5798ca1 not found: ID does not exist" Mar 18 13:25:50.695287 master-0 kubenswrapper[27835]: I0318 13:25:50.695254 27835 scope.go:117] "RemoveContainer" containerID="1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d" Mar 18 13:25:50.695550 master-0 kubenswrapper[27835]: E0318 
13:25:50.695522 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d\": container with ID starting with 1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d not found: ID does not exist" containerID="1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d" Mar 18 13:25:50.695608 master-0 kubenswrapper[27835]: I0318 13:25:50.695544 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d"} err="failed to get container status \"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d\": rpc error: code = NotFound desc = could not find container \"1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d\": container with ID starting with 1f5fb1784d061c1838ee1821bce9e8751dbeeab4df3260e90e439d05f3ac2d1d not found: ID does not exist" Mar 18 13:25:50.695608 master-0 kubenswrapper[27835]: I0318 13:25:50.695559 27835 scope.go:117] "RemoveContainer" containerID="65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1" Mar 18 13:25:50.695853 master-0 kubenswrapper[27835]: E0318 13:25:50.695831 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1\": container with ID starting with 65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1 not found: ID does not exist" containerID="65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1" Mar 18 13:25:50.695907 master-0 kubenswrapper[27835]: I0318 13:25:50.695851 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1"} err="failed to get container status 
\"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1\": rpc error: code = NotFound desc = could not find container \"65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1\": container with ID starting with 65d7e5b55a010fe304ccf53f634d090df67438b3eddaa2aa17e4e898f972f8b1 not found: ID does not exist"
Mar 18 13:25:50.695907 master-0 kubenswrapper[27835]: I0318 13:25:50.695864 27835 scope.go:117] "RemoveContainer" containerID="04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b"
Mar 18 13:25:50.696118 master-0 kubenswrapper[27835]: E0318 13:25:50.696099 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b\": container with ID starting with 04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b not found: ID does not exist" containerID="04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b"
Mar 18 13:25:50.696159 master-0 kubenswrapper[27835]: I0318 13:25:50.696118 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b"} err="failed to get container status \"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b\": rpc error: code = NotFound desc = could not find container \"04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b\": container with ID starting with 04b008430621fdfcd606415bb682fb2830b69521c04743dd7425bc1288a8ae9b not found: ID does not exist"
Mar 18 13:25:51.554024 master-0 kubenswrapper[27835]: I0318 13:25:51.553960 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Mar 18 13:25:51.554219 master-0 kubenswrapper[27835]: I0318 13:25:51.554035 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Mar 18 13:25:52.541556 master-0 kubenswrapper[27835]: E0318 13:25:52.540738 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189df2652413be3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:b45ea2ef1cf2bc9d1d994d6538ae0a64,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Killing,Message:Stopping container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:25:44.464735803 +0000 UTC m=+108.429947363,LastTimestamp:2026-03-18 13:25:44.464735803 +0000 UTC m=+108.429947363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:25:52.550405 master-0 kubenswrapper[27835]: I0318 13:25:52.550296 27835 status_manager.go:851] "Failed to get status for pod" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.552112 master-0 kubenswrapper[27835]: I0318 13:25:52.552058 27835 status_manager.go:851] "Failed to get status for pod" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" pod="openshift-console/downloads-66b8ffb895-2tkdh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-2tkdh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.553655 master-0 kubenswrapper[27835]: I0318 13:25:52.553323 27835 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.554665 master-0 kubenswrapper[27835]: I0318 13:25:52.554615 27835 status_manager.go:851] "Failed to get status for pod" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6c597dfd69-w66v8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.557182 master-0 kubenswrapper[27835]: I0318 13:25:52.557068 27835 status_manager.go:851] "Failed to get status for pod" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6c597dfd69-w66v8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.557992 master-0 kubenswrapper[27835]: I0318 13:25:52.557937 27835 status_manager.go:851] "Failed to get status for pod" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-657fb76bf7-pvlvc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.558688 master-0 kubenswrapper[27835]: I0318 13:25:52.558611 27835 status_manager.go:851] "Failed to get status for pod" podUID="e315913d-cd6e-4455-9035-b35d24adbb2a" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5f77b8654b-44f6m\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.559992 master-0 kubenswrapper[27835]: I0318 13:25:52.559903 27835 status_manager.go:851] "Failed to get status for pod" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.562433 master-0 kubenswrapper[27835]: I0318 13:25:52.562333 27835 status_manager.go:851] "Failed to get status for pod" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" pod="openshift-console/downloads-66b8ffb895-2tkdh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-2tkdh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:52.573836 master-0 kubenswrapper[27835]: I0318 13:25:52.573735 27835 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.042674 master-0 kubenswrapper[27835]: E0318 13:25:53.042565 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.043316 master-0 kubenswrapper[27835]: E0318 13:25:53.043228 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.044230 master-0 kubenswrapper[27835]: E0318 13:25:53.044131 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.045447 master-0 kubenswrapper[27835]: E0318 13:25:53.045287 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.046305 master-0 kubenswrapper[27835]: E0318 13:25:53.046222 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:53.046952 master-0 kubenswrapper[27835]: I0318 13:25:53.046283 27835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 13:25:53.047951 master-0 kubenswrapper[27835]: E0318 13:25:53.047866 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 13:25:53.250492 master-0 kubenswrapper[27835]: E0318 13:25:53.248565 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 13:25:53.649796 master-0 kubenswrapper[27835]: E0318 13:25:53.649735 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 13:25:54.083684 master-0 kubenswrapper[27835]: I0318 13:25:54.083574 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body=
Mar 18 13:25:54.083684 master-0 kubenswrapper[27835]: I0318 13:25:54.083693 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused"
Mar 18 13:25:54.450810 master-0 kubenswrapper[27835]: E0318 13:25:54.450741 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 13:25:55.754576 master-0 kubenswrapper[27835]: E0318 13:25:55.753351 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189df2652413be3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:b45ea2ef1cf2bc9d1d994d6538ae0a64,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Killing,Message:Stopping container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:25:44.464735803 +0000 UTC m=+108.429947363,LastTimestamp:2026-03-18 13:25:44.464735803 +0000 UTC m=+108.429947363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:25:56.054441 master-0 kubenswrapper[27835]: E0318 13:25:56.053740 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 13:25:56.290234 master-0 kubenswrapper[27835]: I0318 13:25:56.290143 27835 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:56.290987 master-0 kubenswrapper[27835]: I0318 13:25:56.290909 27835 status_manager.go:851] "Failed to get status for pod" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6c597dfd69-w66v8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:56.291662 master-0 kubenswrapper[27835]: I0318 13:25:56.291603 27835 status_manager.go:851] "Failed to get status for pod" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-657fb76bf7-pvlvc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:56.292496 master-0 kubenswrapper[27835]: I0318 13:25:56.292429 27835 status_manager.go:851] "Failed to get status for pod" podUID="e315913d-cd6e-4455-9035-b35d24adbb2a" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5f77b8654b-44f6m\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:56.293435 master-0 kubenswrapper[27835]: I0318 13:25:56.293334 27835 status_manager.go:851] "Failed to get status for pod" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:56.294604 master-0 kubenswrapper[27835]: I0318 13:25:56.294536 27835 status_manager.go:851] "Failed to get status for pod" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" pod="openshift-console/downloads-66b8ffb895-2tkdh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-2tkdh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.281199 master-0 kubenswrapper[27835]: I0318 13:25:57.281117 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:25:57.282516 master-0 kubenswrapper[27835]: I0318 13:25:57.282476 27835 status_manager.go:851] "Failed to get status for pod" podUID="e315913d-cd6e-4455-9035-b35d24adbb2a" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5f77b8654b-44f6m\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.282907 master-0 kubenswrapper[27835]: I0318 13:25:57.282872 27835 status_manager.go:851] "Failed to get status for pod" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.283310 master-0 kubenswrapper[27835]: I0318 13:25:57.283283 27835 status_manager.go:851] "Failed to get status for pod" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" pod="openshift-console/downloads-66b8ffb895-2tkdh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-2tkdh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.286478 master-0 kubenswrapper[27835]: I0318 13:25:57.286300 27835 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.287321 master-0 kubenswrapper[27835]: I0318 13:25:57.287220 27835 status_manager.go:851] "Failed to get status for pod" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6c597dfd69-w66v8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.288154 master-0 kubenswrapper[27835]: I0318 13:25:57.288073 27835 status_manager.go:851] "Failed to get status for pod" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-657fb76bf7-pvlvc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:57.301782 master-0 kubenswrapper[27835]: I0318 13:25:57.301743 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:25:57.302037 master-0 kubenswrapper[27835]: I0318 13:25:57.302021 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:25:57.303099 master-0 kubenswrapper[27835]: E0318 13:25:57.303036 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:25:57.303860 master-0 kubenswrapper[27835]: I0318 13:25:57.303815 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:25:57.330200 master-0 kubenswrapper[27835]: W0318 13:25:57.330117 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5ce05b3d592e63f1f92202d52b9635.slice/crio-95df23a079f6570f75a5380345c2edc5b3d1adaa914dde7f3db5469f565c781d WatchSource:0}: Error finding container 95df23a079f6570f75a5380345c2edc5b3d1adaa914dde7f3db5469f565c781d: Status 404 returned error can't find the container with id 95df23a079f6570f75a5380345c2edc5b3d1adaa914dde7f3db5469f565c781d
Mar 18 13:25:57.658579 master-0 kubenswrapper[27835]: I0318 13:25:57.658518 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"95df23a079f6570f75a5380345c2edc5b3d1adaa914dde7f3db5469f565c781d"}
Mar 18 13:25:58.668322 master-0 kubenswrapper[27835]: I0318 13:25:58.668230 27835 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b" exitCode=0
Mar 18 13:25:58.669199 master-0 kubenswrapper[27835]: I0318 13:25:58.668289 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b"}
Mar 18 13:25:58.669199 master-0 kubenswrapper[27835]: I0318 13:25:58.668786 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:25:58.669199 master-0 kubenswrapper[27835]: I0318 13:25:58.668837 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:25:58.669501 master-0 kubenswrapper[27835]: I0318 13:25:58.669264 27835 status_manager.go:851] "Failed to get status for pod" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" pod="openshift-authentication/oauth-openshift-657fb76bf7-pvlvc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-657fb76bf7-pvlvc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:58.670937 master-0 kubenswrapper[27835]: I0318 13:25:58.669855 27835 status_manager.go:851] "Failed to get status for pod" podUID="e315913d-cd6e-4455-9035-b35d24adbb2a" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5f77b8654b-44f6m\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:58.670937 master-0 kubenswrapper[27835]: E0318 13:25:58.669913 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:25:58.670937 master-0 kubenswrapper[27835]: I0318 13:25:58.670812 27835 status_manager.go:851] "Failed to get status for pod" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:58.671402 master-0 kubenswrapper[27835]: I0318 13:25:58.671313 27835 status_manager.go:851] "Failed to get status for pod" podUID="3f688009-66eb-490d-a0fb-464dba69fb96" pod="openshift-console/downloads-66b8ffb895-2tkdh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-2tkdh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:58.672173 master-0 kubenswrapper[27835]: I0318 13:25:58.671956 27835 status_manager.go:851] "Failed to get status for pod" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:58.672674 master-0 kubenswrapper[27835]: I0318 13:25:58.672616 27835 status_manager.go:851] "Failed to get status for pod" podUID="101fe9f1-3211-4844-bb11-c1b6c7696e10" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6c597dfd69-w66v8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:25:59.256820 master-0 kubenswrapper[27835]: E0318 13:25:59.255370 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 18 13:25:59.678797 master-0 kubenswrapper[27835]: I0318 13:25:59.678737 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36"}
Mar 18 13:25:59.678797 master-0 kubenswrapper[27835]: I0318 13:25:59.678786 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1"}
Mar 18 13:26:00.688147 master-0 kubenswrapper[27835]: I0318 13:26:00.688088 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f"}
Mar 18 13:26:00.688147 master-0 kubenswrapper[27835]: I0318 13:26:00.688146 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd"}
Mar 18 13:26:00.688147 master-0 kubenswrapper[27835]: I0318 13:26:00.688157 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895"}
Mar 18 13:26:00.688726 master-0 kubenswrapper[27835]: I0318 13:26:00.688197 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:00.688726 master-0 kubenswrapper[27835]: I0318 13:26:00.688299 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:00.688726 master-0 kubenswrapper[27835]: I0318 13:26:00.688327 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:01.554598 master-0 kubenswrapper[27835]: I0318 13:26:01.554521 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Mar 18 13:26:01.554832 master-0 kubenswrapper[27835]: I0318 13:26:01.554632 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Mar 18 13:26:02.304024 master-0 kubenswrapper[27835]: I0318 13:26:02.303964 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:02.304024 master-0 kubenswrapper[27835]: I0318 13:26:02.304022 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:02.312534 master-0 kubenswrapper[27835]: I0318 13:26:02.312494 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:04.082911 master-0 kubenswrapper[27835]: I0318 13:26:04.082792 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body=
Mar 18 13:26:04.082911 master-0 kubenswrapper[27835]: I0318 13:26:04.082896 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused"
Mar 18 13:26:05.710608 master-0 kubenswrapper[27835]: I0318 13:26:05.710119 27835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:05.743497 master-0 kubenswrapper[27835]: I0318 13:26:05.742329 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/0.log"
Mar 18 13:26:05.743497 master-0 kubenswrapper[27835]: I0318 13:26:05.742393 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55" exitCode=1
Mar 18 13:26:05.743497 master-0 kubenswrapper[27835]: I0318 13:26:05.742449 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerDied","Data":"771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55"}
Mar 18 13:26:05.743497 master-0 kubenswrapper[27835]: I0318 13:26:05.743005 27835 scope.go:117] "RemoveContainer" containerID="771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55"
Mar 18 13:26:06.299386 master-0 kubenswrapper[27835]: I0318 13:26:06.299216 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:26:06.751278 master-0 kubenswrapper[27835]: I0318 13:26:06.751237 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/0.log"
Mar 18 13:26:06.751774 master-0 kubenswrapper[27835]: I0318 13:26:06.751317 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd"}
Mar 18 13:26:06.751774 master-0 kubenswrapper[27835]: I0318 13:26:06.751534 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:06.751774 master-0 kubenswrapper[27835]: I0318 13:26:06.751548 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:06.754873 master-0 kubenswrapper[27835]: I0318 13:26:06.754853 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:06.772373 master-0 kubenswrapper[27835]: I0318 13:26:06.772314 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:26:07.763775 master-0 kubenswrapper[27835]: I0318 13:26:07.763677 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:07.763775 master-0 kubenswrapper[27835]: I0318 13:26:07.763749 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:07.768107 master-0 kubenswrapper[27835]: I0318 13:26:07.767975 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:26:08.085017 master-0 kubenswrapper[27835]: I0318 13:26:08.084885 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:26:08.085290 master-0 kubenswrapper[27835]: I0318 13:26:08.085262 27835 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 13:26:08.085394 master-0 kubenswrapper[27835]: I0318 13:26:08.085370 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 13:26:08.085906 master-0 kubenswrapper[27835]: I0318 13:26:08.085891 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:26:11.554426 master-0 kubenswrapper[27835]: I0318 13:26:11.554350 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Mar 18 13:26:11.555160 master-0 kubenswrapper[27835]: I0318 13:26:11.554440 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Mar 18 13:26:12.064092 master-0 kubenswrapper[27835]: I0318 13:26:12.064025 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 13:26:12.136370 master-0 kubenswrapper[27835]: I0318 13:26:12.136296 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 13:26:12.390717 master-0 kubenswrapper[27835]: I0318 13:26:12.390569 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 13:26:12.428188 master-0 kubenswrapper[27835]: I0318 13:26:12.428111 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 13:26:12.540779 master-0 kubenswrapper[27835]: I0318 13:26:12.540724 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 13:26:12.663520 master-0 kubenswrapper[27835]: I0318 13:26:12.663469 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 13:26:12.788187 master-0 kubenswrapper[27835]: I0318 13:26:12.788151 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 13:26:12.915000 master-0 kubenswrapper[27835]: I0318 13:26:12.914885 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 13:26:13.478595 master-0 kubenswrapper[27835]: I0318 13:26:13.478510 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 13:26:13.639023 master-0 kubenswrapper[27835]: I0318 13:26:13.638970 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 13:26:14.045389 master-0 kubenswrapper[27835]: I0318 13:26:14.045097 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 13:26:14.083363 master-0 kubenswrapper[27835]: I0318 13:26:14.083282 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body=
Mar 18 13:26:14.083695 master-0 kubenswrapper[27835]: I0318 13:26:14.083378 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused"
Mar 18 13:26:14.139527 master-0 kubenswrapper[27835]: I0318 13:26:14.139443 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 13:26:14.222581 master-0 kubenswrapper[27835]: I0318 13:26:14.222462 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 13:26:14.299029 master-0 kubenswrapper[27835]: I0318 13:26:14.298891 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 13:26:14.551500 master-0 kubenswrapper[27835]: I0318 13:26:14.551354 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 13:26:14.664490 master-0 kubenswrapper[27835]: I0318 13:26:14.664387 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 18 13:26:14.666073 master-0 kubenswrapper[27835]: I0318 13:26:14.666018 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 13:26:14.709963 master-0 kubenswrapper[27835]: I0318 13:26:14.709899 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 13:26:14.744426 master-0 kubenswrapper[27835]: I0318 13:26:14.744348 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 13:26:14.765519 master-0 kubenswrapper[27835]: I0318 13:26:14.765465 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 13:26:14.782730 master-0 kubenswrapper[27835]: I0318 13:26:14.782686 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 13:26:15.087005 master-0 kubenswrapper[27835]: I0318 13:26:15.086902 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 13:26:15.094265 master-0 kubenswrapper[27835]: I0318 13:26:15.094160 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 13:26:15.161796 master-0 kubenswrapper[27835]: I0318 13:26:15.161751 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 13:26:15.220995 master-0 kubenswrapper[27835]: I0318 13:26:15.220947 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 13:26:15.403150 master-0 kubenswrapper[27835]: I0318 13:26:15.401787 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 13:26:15.410549 master-0 kubenswrapper[27835]: I0318 13:26:15.410462 27835 reflector.go:368] Caches populated
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:26:15.575279 master-0 kubenswrapper[27835]: I0318 13:26:15.575182 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:26:15.842972 master-0 kubenswrapper[27835]: I0318 13:26:15.842901 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:26:15.874219 master-0 kubenswrapper[27835]: I0318 13:26:15.874098 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:26:16.119582 master-0 kubenswrapper[27835]: I0318 13:26:16.119466 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:26:16.204713 master-0 kubenswrapper[27835]: I0318 13:26:16.204636 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 13:26:16.268758 master-0 kubenswrapper[27835]: I0318 13:26:16.268682 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 13:26:16.397875 master-0 kubenswrapper[27835]: I0318 13:26:16.397707 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:26:16.804208 master-0 kubenswrapper[27835]: I0318 13:26:16.804102 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:26:16.886600 master-0 kubenswrapper[27835]: I0318 13:26:16.886564 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:26:17.647550 master-0 kubenswrapper[27835]: I0318 13:26:17.647498 27835 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:26:17.682657 master-0 kubenswrapper[27835]: I0318 13:26:17.682587 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:26:17.804025 master-0 kubenswrapper[27835]: I0318 13:26:17.803977 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:26:18.000812 master-0 kubenswrapper[27835]: I0318 13:26:18.000750 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-67jff" Mar 18 13:26:18.084283 master-0 kubenswrapper[27835]: I0318 13:26:18.084220 27835 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:26:18.084283 master-0 kubenswrapper[27835]: I0318 13:26:18.084272 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:26:18.153037 master-0 kubenswrapper[27835]: I0318 13:26:18.152989 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5fm1li8uoic3j" Mar 18 13:26:18.263749 master-0 kubenswrapper[27835]: I0318 13:26:18.263616 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 13:26:18.375114 master-0 kubenswrapper[27835]: I0318 13:26:18.375056 27835 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:26:18.435176 master-0 kubenswrapper[27835]: I0318 13:26:18.435108 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:26:18.505743 master-0 kubenswrapper[27835]: I0318 13:26:18.505657 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:26:18.817939 master-0 kubenswrapper[27835]: I0318 13:26:18.817866 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:26:19.071368 master-0 kubenswrapper[27835]: I0318 13:26:19.071198 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 13:26:19.100040 master-0 kubenswrapper[27835]: I0318 13:26:19.099993 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:26:19.270167 master-0 kubenswrapper[27835]: I0318 13:26:19.270088 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:26:19.433800 master-0 kubenswrapper[27835]: I0318 13:26:19.433754 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-dprq6" Mar 18 13:26:19.591308 master-0 kubenswrapper[27835]: I0318 13:26:19.591237 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hr2xw" Mar 18 13:26:19.708120 master-0 kubenswrapper[27835]: I0318 13:26:19.707962 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:26:19.796142 master-0 kubenswrapper[27835]: I0318 
13:26:19.796078 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:26:20.034273 master-0 kubenswrapper[27835]: I0318 13:26:20.034150 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:26:20.061389 master-0 kubenswrapper[27835]: I0318 13:26:20.061332 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:26:20.185152 master-0 kubenswrapper[27835]: I0318 13:26:20.185088 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:26:20.203970 master-0 kubenswrapper[27835]: I0318 13:26:20.203921 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:26:20.236348 master-0 kubenswrapper[27835]: I0318 13:26:20.236274 27835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:26:20.418915 master-0 kubenswrapper[27835]: I0318 13:26:20.418849 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:26:20.622269 master-0 kubenswrapper[27835]: I0318 13:26:20.622221 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 13:26:20.775401 master-0 kubenswrapper[27835]: I0318 13:26:20.775266 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:26:20.909197 master-0 kubenswrapper[27835]: I0318 13:26:20.908953 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 13:26:20.910377 master-0 kubenswrapper[27835]: I0318 
13:26:20.909538 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 13:26:20.910377 master-0 kubenswrapper[27835]: I0318 13:26:20.909784 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:26:20.953392 master-0 kubenswrapper[27835]: I0318 13:26:20.953311 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:26:21.001610 master-0 kubenswrapper[27835]: I0318 13:26:21.001548 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:26:21.097619 master-0 kubenswrapper[27835]: I0318 13:26:21.097469 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:26:21.144375 master-0 kubenswrapper[27835]: I0318 13:26:21.144304 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:26:21.273921 master-0 kubenswrapper[27835]: I0318 13:26:21.273830 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:26:21.484051 master-0 kubenswrapper[27835]: I0318 13:26:21.483961 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 13:26:21.553940 master-0 kubenswrapper[27835]: I0318 13:26:21.553857 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:26:21.553940 master-0 kubenswrapper[27835]: I0318 13:26:21.553921 27835 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:26:21.558576 master-0 kubenswrapper[27835]: I0318 13:26:21.558526 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 13:26:21.609731 master-0 kubenswrapper[27835]: I0318 13:26:21.609559 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:26:21.639341 master-0 kubenswrapper[27835]: I0318 13:26:21.638960 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:26:21.849362 master-0 kubenswrapper[27835]: I0318 13:26:21.849283 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:26:22.016561 master-0 kubenswrapper[27835]: I0318 13:26:22.016505 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:26:22.252385 master-0 kubenswrapper[27835]: I0318 13:26:22.252309 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6fb5w" Mar 18 13:26:22.407967 master-0 kubenswrapper[27835]: I0318 13:26:22.407910 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:26:22.453896 master-0 kubenswrapper[27835]: I0318 13:26:22.453828 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:26:22.655500 master-0 kubenswrapper[27835]: I0318 13:26:22.655329 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 13:26:22.687784 master-0 kubenswrapper[27835]: I0318 13:26:22.687718 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:26:22.825103 master-0 kubenswrapper[27835]: I0318 13:26:22.825050 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:26:22.850924 master-0 kubenswrapper[27835]: I0318 13:26:22.850882 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:26:22.877828 master-0 kubenswrapper[27835]: I0318 13:26:22.876768 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 13:26:22.921639 master-0 kubenswrapper[27835]: I0318 13:26:22.921558 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:26:22.944466 master-0 kubenswrapper[27835]: I0318 13:26:22.944376 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:26:22.963457 master-0 kubenswrapper[27835]: I0318 13:26:22.963402 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-7kt87" Mar 18 13:26:23.006029 master-0 kubenswrapper[27835]: I0318 13:26:23.003895 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:26:23.036557 master-0 kubenswrapper[27835]: I0318 13:26:23.036397 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:26:23.062524 master-0 
kubenswrapper[27835]: I0318 13:26:23.062393 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:26:23.192744 master-0 kubenswrapper[27835]: I0318 13:26:23.192549 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-sn888" Mar 18 13:26:23.193033 master-0 kubenswrapper[27835]: I0318 13:26:23.192757 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:26:23.287338 master-0 kubenswrapper[27835]: I0318 13:26:23.287239 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:26:23.358032 master-0 kubenswrapper[27835]: I0318 13:26:23.357935 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:26:23.497851 master-0 kubenswrapper[27835]: I0318 13:26:23.497659 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 13:26:23.502706 master-0 kubenswrapper[27835]: I0318 13:26:23.502636 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:26:23.546171 master-0 kubenswrapper[27835]: I0318 13:26:23.546082 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-hl5hl" Mar 18 13:26:23.611088 master-0 kubenswrapper[27835]: I0318 13:26:23.611006 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:26:23.787586 master-0 kubenswrapper[27835]: I0318 13:26:23.787400 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 13:26:23.802790 master-0 
kubenswrapper[27835]: I0318 13:26:23.802713 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:26:23.887545 master-0 kubenswrapper[27835]: I0318 13:26:23.887490 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:26:23.965044 master-0 kubenswrapper[27835]: I0318 13:26:23.964990 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx" Mar 18 13:26:24.048215 master-0 kubenswrapper[27835]: I0318 13:26:24.048102 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:26:24.083442 master-0 kubenswrapper[27835]: I0318 13:26:24.083183 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:26:24.083442 master-0 kubenswrapper[27835]: I0318 13:26:24.083249 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:26:24.124072 master-0 kubenswrapper[27835]: I0318 13:26:24.123961 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:26:24.183554 master-0 kubenswrapper[27835]: I0318 13:26:24.183483 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:26:24.213364 master-0 kubenswrapper[27835]: I0318 
13:26:24.213300 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:26:24.272102 master-0 kubenswrapper[27835]: I0318 13:26:24.272038 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:26:24.319097 master-0 kubenswrapper[27835]: I0318 13:26:24.318974 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 13:26:24.320096 master-0 kubenswrapper[27835]: I0318 13:26:24.320051 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 13:26:24.367705 master-0 kubenswrapper[27835]: I0318 13:26:24.367646 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 13:26:24.407020 master-0 kubenswrapper[27835]: I0318 13:26:24.406951 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 13:26:24.432962 master-0 kubenswrapper[27835]: I0318 13:26:24.432906 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:26:24.468335 master-0 kubenswrapper[27835]: I0318 13:26:24.468296 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 13:26:24.546530 master-0 kubenswrapper[27835]: I0318 13:26:24.546489 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:26:24.552604 master-0 kubenswrapper[27835]: I0318 13:26:24.552564 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:26:24.657865 master-0 kubenswrapper[27835]: I0318 13:26:24.657744 27835 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 13:26:24.686698 master-0 kubenswrapper[27835]: I0318 13:26:24.686633 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:26:24.708740 master-0 kubenswrapper[27835]: I0318 13:26:24.708696 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:26:24.777445 master-0 kubenswrapper[27835]: I0318 13:26:24.777331 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:26:24.789546 master-0 kubenswrapper[27835]: I0318 13:26:24.789509 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:26:24.814486 master-0 kubenswrapper[27835]: I0318 13:26:24.814375 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:26:24.937806 master-0 kubenswrapper[27835]: I0318 13:26:24.937753 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:26:24.945585 master-0 kubenswrapper[27835]: I0318 13:26:24.945527 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:26:24.979326 master-0 kubenswrapper[27835]: I0318 13:26:24.979220 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5" Mar 18 13:26:25.180492 master-0 kubenswrapper[27835]: I0318 13:26:25.176518 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 13:26:25.296764 master-0 kubenswrapper[27835]: I0318 13:26:25.296641 27835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:26:25.298633 master-0 kubenswrapper[27835]: I0318 13:26:25.298603 27835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:26:25.342838 master-0 kubenswrapper[27835]: I0318 13:26:25.342742 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:26:25.344526 master-0 kubenswrapper[27835]: I0318 13:26:25.344489 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:26:25.416356 master-0 kubenswrapper[27835]: I0318 13:26:25.416280 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:26:25.511002 master-0 kubenswrapper[27835]: I0318 13:26:25.510919 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 13:26:25.559929 master-0 kubenswrapper[27835]: I0318 13:26:25.559803 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-9lwzk" Mar 18 13:26:25.594797 master-0 kubenswrapper[27835]: I0318 13:26:25.594733 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:26:25.689893 master-0 kubenswrapper[27835]: I0318 13:26:25.689815 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 13:26:25.736796 master-0 kubenswrapper[27835]: I0318 13:26:25.736743 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:26:25.992666 master-0 
kubenswrapper[27835]: I0318 13:26:25.992602 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:26:26.085051 master-0 kubenswrapper[27835]: I0318 13:26:26.084955 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-sjstk" Mar 18 13:26:26.092471 master-0 kubenswrapper[27835]: I0318 13:26:26.092394 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:26:26.271056 master-0 kubenswrapper[27835]: I0318 13:26:26.270927 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:26:26.400022 master-0 kubenswrapper[27835]: I0318 13:26:26.399927 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 13:26:26.437640 master-0 kubenswrapper[27835]: I0318 13:26:26.437528 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:26:26.447908 master-0 kubenswrapper[27835]: I0318 13:26:26.447825 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:26:26.466911 master-0 kubenswrapper[27835]: I0318 13:26:26.466859 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:26:26.521519 master-0 kubenswrapper[27835]: I0318 13:26:26.519686 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:26:26.565444 master-0 kubenswrapper[27835]: I0318 13:26:26.565328 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:26:26.625454 
master-0 kubenswrapper[27835]: I0318 13:26:26.625298 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:26:26.650548 master-0 kubenswrapper[27835]: I0318 13:26:26.650515 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:26:26.663199 master-0 kubenswrapper[27835]: I0318 13:26:26.663143 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:26:26.671316 master-0 kubenswrapper[27835]: I0318 13:26:26.671234 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:26:26.682222 master-0 kubenswrapper[27835]: I0318 13:26:26.682194 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:26:26.706052 master-0 kubenswrapper[27835]: I0318 13:26:26.705971 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:26:26.747789 master-0 kubenswrapper[27835]: I0318 13:26:26.747699 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:26:26.933677 master-0 kubenswrapper[27835]: I0318 13:26:26.933625 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:26:26.971220 master-0 kubenswrapper[27835]: I0318 13:26:26.971161 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:26:27.039784 master-0 kubenswrapper[27835]: I0318 13:26:27.039726 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:26:27.064333 
master-0 kubenswrapper[27835]: I0318 13:26:27.064275 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:26:27.072160 master-0 kubenswrapper[27835]: I0318 13:26:27.072088 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 13:26:27.073475 master-0 kubenswrapper[27835]: I0318 13:26:27.073388 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 13:26:27.080380 master-0 kubenswrapper[27835]: I0318 13:26:27.080325 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:26:27.142035 master-0 kubenswrapper[27835]: I0318 13:26:27.141969 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 13:26:27.154263 master-0 kubenswrapper[27835]: I0318 13:26:27.154224 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 18 13:26:27.237315 master-0 kubenswrapper[27835]: I0318 13:26:27.237105 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 13:26:27.239605 master-0 kubenswrapper[27835]: I0318 13:26:27.239534 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l"
Mar 18 13:26:27.348231 master-0 kubenswrapper[27835]: I0318 13:26:27.348160 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 13:26:27.357436 master-0 kubenswrapper[27835]: I0318 13:26:27.357354 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 18 13:26:27.359813 master-0 kubenswrapper[27835]: I0318 13:26:27.359739 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 13:26:27.401915 master-0 kubenswrapper[27835]: I0318 13:26:27.401800 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:26:27.458699 master-0 kubenswrapper[27835]: I0318 13:26:27.458643 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-dz6jc"
Mar 18 13:26:27.483513 master-0 kubenswrapper[27835]: I0318 13:26:27.483404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 13:26:27.551470 master-0 kubenswrapper[27835]: I0318 13:26:27.551279 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 13:26:27.555176 master-0 kubenswrapper[27835]: I0318 13:26:27.555131 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 13:26:27.596434 master-0 kubenswrapper[27835]: I0318 13:26:27.596328 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 13:26:27.651560 master-0 kubenswrapper[27835]: I0318 13:26:27.651500 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 13:26:27.767289 master-0 kubenswrapper[27835]: I0318 13:26:27.767178 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-5dnvq"
Mar 18 13:26:27.770457 master-0 kubenswrapper[27835]: I0318 13:26:27.770347 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 13:26:27.804626 master-0 kubenswrapper[27835]: I0318 13:26:27.804390 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 13:26:27.816156 master-0 kubenswrapper[27835]: I0318 13:26:27.816035 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 13:26:27.862560 master-0 kubenswrapper[27835]: I0318 13:26:27.861559 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 13:26:27.862560 master-0 kubenswrapper[27835]: I0318 13:26:27.862015 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 18 13:26:27.865880 master-0 kubenswrapper[27835]: I0318 13:26:27.865821 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:26:27.966864 master-0 kubenswrapper[27835]: I0318 13:26:27.966789 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:26:28.012926 master-0 kubenswrapper[27835]: I0318 13:26:28.012888 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 13:26:28.051929 master-0 kubenswrapper[27835]: I0318 13:26:28.051894 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 13:26:28.084323 master-0 kubenswrapper[27835]: I0318 13:26:28.084208 27835 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 13:26:28.084323 master-0 kubenswrapper[27835]: I0318 13:26:28.084270 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 13:26:28.084323 master-0 kubenswrapper[27835]: I0318 13:26:28.084320 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:26:28.084983 master-0 kubenswrapper[27835]: I0318 13:26:28.084954 27835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 18 13:26:28.085086 master-0 kubenswrapper[27835]: I0318 13:26:28.085062 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" containerID="cri-o://9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd" gracePeriod=30
Mar 18 13:26:28.099478 master-0 kubenswrapper[27835]: I0318 13:26:28.099390 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 13:26:28.134050 master-0 kubenswrapper[27835]: I0318 13:26:28.133981 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 13:26:28.221866 master-0 kubenswrapper[27835]: I0318 13:26:28.221746 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:26:28.318769 master-0 kubenswrapper[27835]: I0318 13:26:28.318698 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lz5d6"
Mar 18 13:26:28.372215 master-0 kubenswrapper[27835]: I0318 13:26:28.372078 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mbkdw"
Mar 18 13:26:28.450227 master-0 kubenswrapper[27835]: I0318 13:26:28.450155 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 13:26:28.494151 master-0 kubenswrapper[27835]: I0318 13:26:28.494062 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:26:28.546359 master-0 kubenswrapper[27835]: I0318 13:26:28.546282 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 13:26:28.561284 master-0 kubenswrapper[27835]: I0318 13:26:28.561222 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 13:26:28.569027 master-0 kubenswrapper[27835]: I0318 13:26:28.568982 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 13:26:28.599158 master-0 kubenswrapper[27835]: I0318 13:26:28.599109 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 13:26:28.599855 master-0 kubenswrapper[27835]: I0318 13:26:28.599807 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 13:26:28.754265 master-0 kubenswrapper[27835]: I0318 13:26:28.754207 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 13:26:28.982493 master-0 kubenswrapper[27835]: I0318 13:26:28.982402 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 13:26:29.039509 master-0 kubenswrapper[27835]: I0318 13:26:29.039316 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 13:26:29.174546 master-0 kubenswrapper[27835]: I0318 13:26:29.174485 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 13:26:29.266291 master-0 kubenswrapper[27835]: I0318 13:26:29.266229 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-6lm6r"
Mar 18 13:26:29.449771 master-0 kubenswrapper[27835]: I0318 13:26:29.449689 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 13:26:29.467215 master-0 kubenswrapper[27835]: I0318 13:26:29.467185 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 13:26:29.491159 master-0 kubenswrapper[27835]: I0318 13:26:29.491115 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 13:26:29.550865 master-0 kubenswrapper[27835]: I0318 13:26:29.548197 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-lvs7l"
Mar 18 13:26:29.626778 master-0 kubenswrapper[27835]: I0318 13:26:29.626690 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 13:26:29.633350 master-0 kubenswrapper[27835]: I0318 13:26:29.633318 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 13:26:29.638169 master-0 kubenswrapper[27835]: I0318 13:26:29.638134 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 13:26:29.692881 master-0 kubenswrapper[27835]: I0318 13:26:29.692814 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 18 13:26:29.709576 master-0 kubenswrapper[27835]: I0318 13:26:29.709456 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 13:26:29.747343 master-0 kubenswrapper[27835]: I0318 13:26:29.747284 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 13:26:29.827333 master-0 kubenswrapper[27835]: I0318 13:26:29.827236 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 13:26:29.895439 master-0 kubenswrapper[27835]: I0318 13:26:29.895327 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 13:26:29.907903 master-0 kubenswrapper[27835]: I0318 13:26:29.907857 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:26:29.953204 master-0 kubenswrapper[27835]: I0318 13:26:29.953120 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 13:26:29.998179 master-0 kubenswrapper[27835]: I0318 13:26:29.997977 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 13:26:30.065103 master-0 kubenswrapper[27835]: I0318 13:26:30.065044 27835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 13:26:30.150619 master-0 kubenswrapper[27835]: I0318 13:26:30.150551 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 13:26:30.160317 master-0 kubenswrapper[27835]: I0318 13:26:30.160263 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 13:26:30.165167 master-0 kubenswrapper[27835]: I0318 13:26:30.165125 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 13:26:30.198087 master-0 kubenswrapper[27835]: I0318 13:26:30.198037 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqvgp"
Mar 18 13:26:30.198550 master-0 kubenswrapper[27835]: I0318 13:26:30.198523 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:26:30.305091 master-0 kubenswrapper[27835]: I0318 13:26:30.304982 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 13:26:30.525531 master-0 kubenswrapper[27835]: I0318 13:26:30.525452 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 13:26:30.556090 master-0 kubenswrapper[27835]: I0318 13:26:30.555953 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sszww"
Mar 18 13:26:30.771133 master-0 kubenswrapper[27835]: I0318 13:26:30.771082 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 13:26:30.859850 master-0 kubenswrapper[27835]: I0318 13:26:30.859760 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:26:30.862138 master-0 kubenswrapper[27835]: I0318 13:26:30.862121 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 13:26:30.888604 master-0 kubenswrapper[27835]: I0318 13:26:30.888523 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 13:26:31.043483 master-0 kubenswrapper[27835]: I0318 13:26:31.043428 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 13:26:31.171126 master-0 kubenswrapper[27835]: I0318 13:26:31.170996 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 13:26:31.470819 master-0 kubenswrapper[27835]: I0318 13:26:31.470657 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 13:26:31.488290 master-0 kubenswrapper[27835]: I0318 13:26:31.488199 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-q8tt6"
Mar 18 13:26:31.516662 master-0 kubenswrapper[27835]: I0318 13:26:31.516595 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 13:26:31.555036 master-0 kubenswrapper[27835]: I0318 13:26:31.554932 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Mar 18 13:26:31.556040 master-0 kubenswrapper[27835]: I0318 13:26:31.555044 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Mar 18 13:26:31.567869 master-0 kubenswrapper[27835]: I0318 13:26:31.567807 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 13:26:31.588673 master-0 kubenswrapper[27835]: I0318 13:26:31.588608 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 13:26:31.715756 master-0 kubenswrapper[27835]: I0318 13:26:31.715700 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 13:26:31.809306 master-0 kubenswrapper[27835]: I0318 13:26:31.809157 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 13:26:31.938352 master-0 kubenswrapper[27835]: I0318 13:26:31.938266 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 13:26:32.070522 master-0 kubenswrapper[27835]: I0318 13:26:32.070243 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 13:26:32.168758 master-0 kubenswrapper[27835]: I0318 13:26:32.168692 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 13:26:32.298513 master-0 kubenswrapper[27835]: I0318 13:26:32.298378 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:26:32.416955 master-0 kubenswrapper[27835]: I0318 13:26:32.416896 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vbmv6"
Mar 18 13:26:32.501258 master-0 kubenswrapper[27835]: I0318 13:26:32.501153 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-kcq89"
Mar 18 13:26:32.619085 master-0 kubenswrapper[27835]: I0318 13:26:32.619027 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b"
Mar 18 13:26:32.658960 master-0 kubenswrapper[27835]: I0318 13:26:32.658887 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:26:32.676150 master-0 kubenswrapper[27835]: I0318 13:26:32.676038 27835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 13:26:32.676330 master-0 kubenswrapper[27835]: I0318 13:26:32.676204 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f77b8654b-44f6m" podStartSLOduration=60.676152754 podStartE2EDuration="1m0.676152754s" podCreationTimestamp="2026-03-18 13:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:05.934642944 +0000 UTC m=+129.899854514" watchObservedRunningTime="2026-03-18 13:26:32.676152754 +0000 UTC m=+156.641364324"
Mar 18 13:26:32.677638 master-0 kubenswrapper[27835]: I0318 13:26:32.677586 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c597dfd69-w66v8" podStartSLOduration=60.677575666 podStartE2EDuration="1m0.677575666s" podCreationTimestamp="2026-03-18 13:25:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:05.893064059 +0000 UTC m=+129.858275639" watchObservedRunningTime="2026-03-18 13:26:32.677575666 +0000 UTC m=+156.642787236"
Mar 18 13:26:32.680332 master-0 kubenswrapper[27835]: I0318 13:26:32.680280 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=48.680269136 podStartE2EDuration="48.680269136s" podCreationTimestamp="2026-03-18 13:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:05.872099243 +0000 UTC m=+129.837310803" watchObservedRunningTime="2026-03-18 13:26:32.680269136 +0000 UTC m=+156.645480706"
Mar 18 13:26:32.683612 master-0 kubenswrapper[27835]: I0318 13:26:32.683566 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-657fb76bf7-pvlvc","openshift-kube-apiserver/installer-4-master-0","openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:26:32.683679 master-0 kubenswrapper[27835]: I0318 13:26:32.683644 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6b4867d948-qsvkm","openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:26:32.683952 master-0 kubenswrapper[27835]: E0318 13:26:32.683917 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" containerName="oauth-openshift"
Mar 18 13:26:32.683952 master-0 kubenswrapper[27835]: I0318 13:26:32.683940 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" containerName="oauth-openshift"
Mar 18 13:26:32.684038 master-0 kubenswrapper[27835]: E0318 13:26:32.683979 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" containerName="installer"
Mar 18 13:26:32.684038 master-0 kubenswrapper[27835]: I0318 13:26:32.683993 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" containerName="installer"
Mar 18 13:26:32.684161 master-0 kubenswrapper[27835]: I0318 13:26:32.683985 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:32.684161 master-0 kubenswrapper[27835]: I0318 13:26:32.684150 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" containerName="installer"
Mar 18 13:26:32.684233 master-0 kubenswrapper[27835]: I0318 13:26:32.684165 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="70ee7a5d-5760-49fd-98af-f1df5055a085"
Mar 18 13:26:32.684233 master-0 kubenswrapper[27835]: I0318 13:26:32.684177 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" containerName="oauth-openshift"
Mar 18 13:26:32.685850 master-0 kubenswrapper[27835]: I0318 13:26:32.685806 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.687129 master-0 kubenswrapper[27835]: I0318 13:26:32.687086 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 13:26:32.688199 master-0 kubenswrapper[27835]: I0318 13:26:32.688138 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 13:26:32.688757 master-0 kubenswrapper[27835]: I0318 13:26:32.688677 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-vxpff"
Mar 18 13:26:32.688985 master-0 kubenswrapper[27835]: I0318 13:26:32.688949 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 18 13:26:32.689149 master-0 kubenswrapper[27835]: I0318 13:26:32.689121 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 13:26:32.689298 master-0 kubenswrapper[27835]: I0318 13:26:32.689263 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 13:26:32.689502 master-0 kubenswrapper[27835]: I0318 13:26:32.689458 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 13:26:32.693745 master-0 kubenswrapper[27835]: I0318 13:26:32.689638 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 18 13:26:32.693745 master-0 kubenswrapper[27835]: I0318 13:26:32.689747 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 13:26:32.693745 master-0 kubenswrapper[27835]: I0318 13:26:32.689765 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 13:26:32.693745 master-0 kubenswrapper[27835]: I0318 13:26:32.690211 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 13:26:32.693745 master-0 kubenswrapper[27835]: I0318 13:26:32.691821 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 18 13:26:32.694083 master-0 kubenswrapper[27835]: I0318 13:26:32.693854 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:26:32.701686 master-0 kubenswrapper[27835]: I0318 13:26:32.699065 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 18 13:26:32.703311 master-0 kubenswrapper[27835]: I0318 13:26:32.703273 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 18 13:26:32.709377 master-0 kubenswrapper[27835]: I0318 13:26:32.709321 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 13:26:32.759151 master-0 kubenswrapper[27835]: I0318 13:26:32.759061 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=27.759043474 podStartE2EDuration="27.759043474s" podCreationTimestamp="2026-03-18 13:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:32.732363346 +0000 UTC m=+156.697574926" watchObservedRunningTime="2026-03-18 13:26:32.759043474 +0000 UTC m=+156.724255044"
Mar 18 13:26:32.785129 master-0 kubenswrapper[27835]: I0318 13:26:32.785018 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785129 master-0 kubenswrapper[27835]: I0318 13:26:32.785128 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-dir\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785129 master-0 kubenswrapper[27835]: I0318 13:26:32.785154 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785182 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785211 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-policies\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785242 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785300 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785325 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhkdg\" (UniqueName: \"kubernetes.io/projected/d5b46736-78ed-49f2-88ea-b5f864675d0f-kube-api-access-mhkdg\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785491 master-0 kubenswrapper[27835]: I0318 13:26:32.785436 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785749 master-0 kubenswrapper[27835]: I0318 13:26:32.785624 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785844 master-0 kubenswrapper[27835]: I0318 13:26:32.785808 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-session\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.785882 master-0 kubenswrapper[27835]: I0318 13:26:32.785852 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.786063 master-0 kubenswrapper[27835]: I0318 13:26:32.786029 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.887144 master-0 kubenswrapper[27835]: I0318 13:26:32.887052 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.887144 master-0 kubenswrapper[27835]: I0318 13:26:32.887111 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhkdg\" (UniqueName: \"kubernetes.io/projected/d5b46736-78ed-49f2-88ea-b5f864675d0f-kube-api-access-mhkdg\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.887144 master-0 kubenswrapper[27835]: I0318 13:26:32.887141 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.887983 master-0 kubenswrapper[27835]: I0318 13:26:32.887918 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.888123 master-0 kubenswrapper[27835]: I0318 13:26:32.887993 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-session\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.888123 master-0 kubenswrapper[27835]: I0318 13:26:32.888027 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.888123 master-0 kubenswrapper[27835]: I0318 13:26:32.888079 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.888408 master-0 kubenswrapper[27835]: I0318 13:26:32.888305 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm"
Mar 18 13:26:32.888408 master-0 kubenswrapper[27835]: I0318 13:26:32.888383 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName:
\"kubernetes.io/host-path/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-dir\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.888408 master-0 kubenswrapper[27835]: I0318 13:26:32.888444 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.888710 master-0 kubenswrapper[27835]: I0318 13:26:32.888478 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.888710 master-0 kubenswrapper[27835]: I0318 13:26:32.888520 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-policies\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.888710 master-0 kubenswrapper[27835]: I0318 13:26:32.888562 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " 
pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.889765 master-0 kubenswrapper[27835]: I0318 13:26:32.889704 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.890574 master-0 kubenswrapper[27835]: I0318 13:26:32.890501 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-dir\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.891403 master-0 kubenswrapper[27835]: I0318 13:26:32.891349 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.892222 master-0 kubenswrapper[27835]: I0318 13:26:32.892158 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.893026 master-0 kubenswrapper[27835]: I0318 13:26:32.892913 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/d5b46736-78ed-49f2-88ea-b5f864675d0f-audit-policies\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.894571 master-0 kubenswrapper[27835]: I0318 13:26:32.894509 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.895086 master-0 kubenswrapper[27835]: I0318 13:26:32.895009 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.896014 master-0 kubenswrapper[27835]: I0318 13:26:32.895945 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-login\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.896128 master-0 kubenswrapper[27835]: I0318 13:26:32.895961 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: 
\"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.896522 master-0 kubenswrapper[27835]: I0318 13:26:32.896460 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-user-template-error\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.897004 master-0 kubenswrapper[27835]: I0318 13:26:32.896950 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.901083 master-0 kubenswrapper[27835]: I0318 13:26:32.901027 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d5b46736-78ed-49f2-88ea-b5f864675d0f-v4-0-config-system-session\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.917899 master-0 kubenswrapper[27835]: I0318 13:26:32.917774 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhkdg\" (UniqueName: \"kubernetes.io/projected/d5b46736-78ed-49f2-88ea-b5f864675d0f-kube-api-access-mhkdg\") pod \"oauth-openshift-6b4867d948-qsvkm\" (UID: \"d5b46736-78ed-49f2-88ea-b5f864675d0f\") " pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:32.930207 master-0 kubenswrapper[27835]: I0318 13:26:32.930033 27835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:26:32.982998 master-0 kubenswrapper[27835]: I0318 13:26:32.982917 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:26:33.002848 master-0 kubenswrapper[27835]: I0318 13:26:33.002762 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:26:33.010942 master-0 kubenswrapper[27835]: I0318 13:26:33.010867 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:33.114479 master-0 kubenswrapper[27835]: I0318 13:26:33.112908 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 13:26:33.145935 master-0 kubenswrapper[27835]: I0318 13:26:33.145090 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:26:33.310621 master-0 kubenswrapper[27835]: I0318 13:26:33.310443 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:26:33.457207 master-0 kubenswrapper[27835]: I0318 13:26:33.457112 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6b4867d948-qsvkm"] Mar 18 13:26:33.486386 master-0 kubenswrapper[27835]: I0318 13:26:33.486337 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:26:33.508967 master-0 kubenswrapper[27835]: I0318 13:26:33.508916 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:26:33.692467 master-0 kubenswrapper[27835]: I0318 13:26:33.692433 27835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hnp25" Mar 18 13:26:33.809791 master-0 kubenswrapper[27835]: I0318 13:26:33.809720 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:26:33.945102 master-0 kubenswrapper[27835]: I0318 13:26:33.944969 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:26:34.008297 master-0 kubenswrapper[27835]: I0318 13:26:34.008244 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" event={"ID":"d5b46736-78ed-49f2-88ea-b5f864675d0f","Type":"ContainerStarted","Data":"bb689ee5ef22b5d981fa30c745a5b5ccf2ca7f4faaaa0cb6422b46654ee592c8"} Mar 18 13:26:34.008297 master-0 kubenswrapper[27835]: I0318 13:26:34.008288 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" event={"ID":"d5b46736-78ed-49f2-88ea-b5f864675d0f","Type":"ContainerStarted","Data":"535730dd4b617429cb664488164564284156d7d9d560c5b8c8ae7670186c5261"} Mar 18 13:26:34.038328 master-0 kubenswrapper[27835]: I0318 13:26:34.038171 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" podStartSLOduration=58.038119199 podStartE2EDuration="58.038119199s" podCreationTimestamp="2026-03-18 13:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:26:34.036059728 +0000 UTC m=+158.001271318" watchObservedRunningTime="2026-03-18 13:26:34.038119199 +0000 UTC m=+158.003330799" Mar 18 13:26:34.082823 master-0 kubenswrapper[27835]: I0318 13:26:34.082756 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:26:34.083067 master-0 kubenswrapper[27835]: I0318 13:26:34.082852 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:26:34.140115 master-0 kubenswrapper[27835]: I0318 13:26:34.140034 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r54p6" Mar 18 13:26:34.291103 master-0 kubenswrapper[27835]: I0318 13:26:34.290945 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0737b13d-faed-44e2-9d20-1f3860dcc9bd" path="/var/lib/kubelet/pods/0737b13d-faed-44e2-9d20-1f3860dcc9bd/volumes" Mar 18 13:26:34.292031 master-0 kubenswrapper[27835]: I0318 13:26:34.291984 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbc9ea24-0c80-4453-8313-f8ffe06714e5" path="/var/lib/kubelet/pods/dbc9ea24-0c80-4453-8313-f8ffe06714e5/volumes" Mar 18 13:26:34.370713 master-0 kubenswrapper[27835]: I0318 13:26:34.370652 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:26:34.413138 master-0 kubenswrapper[27835]: I0318 13:26:34.413069 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:26:34.536794 master-0 kubenswrapper[27835]: I0318 13:26:34.536710 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:26:35.016530 master-0 kubenswrapper[27835]: I0318 13:26:35.016451 27835 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:35.024665 master-0 kubenswrapper[27835]: I0318 13:26:35.024623 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6b4867d948-qsvkm" Mar 18 13:26:35.150679 master-0 kubenswrapper[27835]: I0318 13:26:35.150599 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:26:35.394087 master-0 kubenswrapper[27835]: I0318 13:26:35.393901 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:26:35.743507 master-0 kubenswrapper[27835]: I0318 13:26:35.743460 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:26:35.823752 master-0 kubenswrapper[27835]: I0318 13:26:35.823708 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 13:26:39.708443 master-0 kubenswrapper[27835]: I0318 13:26:39.708333 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:26:39.709547 master-0 kubenswrapper[27835]: I0318 13:26:39.708601 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" containerID="cri-o://a47091116d34af37c6ea269c78da163805f18cfa0d6c2e8a8c3b428da6f84af7" gracePeriod=5 Mar 18 13:26:41.554778 master-0 kubenswrapper[27835]: I0318 13:26:41.554695 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:26:41.555579 master-0 kubenswrapper[27835]: I0318 13:26:41.554781 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:26:44.083146 master-0 kubenswrapper[27835]: I0318 13:26:44.083070 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:26:44.083146 master-0 kubenswrapper[27835]: I0318 13:26:44.083129 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:26:45.102521 master-0 kubenswrapper[27835]: I0318 13:26:45.102365 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 18 13:26:45.102521 master-0 kubenswrapper[27835]: I0318 13:26:45.102481 27835 generic.go:334] "Generic (PLEG): container finished" podID="16fb4ea7f83036d9c6adf3454fc7e9db" containerID="a47091116d34af37c6ea269c78da163805f18cfa0d6c2e8a8c3b428da6f84af7" exitCode=137 Mar 18 13:26:45.284495 master-0 kubenswrapper[27835]: I0318 13:26:45.284440 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 18 13:26:45.284495 master-0 kubenswrapper[27835]: I0318 13:26:45.284492 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:45.368166 master-0 kubenswrapper[27835]: I0318 13:26:45.368032 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 13:26:45.368166 master-0 kubenswrapper[27835]: I0318 13:26:45.368095 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 13:26:45.368166 master-0 kubenswrapper[27835]: I0318 13:26:45.368123 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368178 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368201 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368233 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368286 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock" (OuterVolumeSpecName: "var-lock") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368290 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log" (OuterVolumeSpecName: "var-log") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:45.368510 master-0 kubenswrapper[27835]: I0318 13:26:45.368327 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests" (OuterVolumeSpecName: "manifests") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:45.368789 master-0 kubenswrapper[27835]: I0318 13:26:45.368610 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:45.368789 master-0 kubenswrapper[27835]: I0318 13:26:45.368626 27835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:45.368789 master-0 kubenswrapper[27835]: I0318 13:26:45.368634 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:45.368789 master-0 kubenswrapper[27835]: I0318 13:26:45.368643 27835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:45.374202 master-0 kubenswrapper[27835]: I0318 13:26:45.374170 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:26:45.470199 master-0 kubenswrapper[27835]: I0318 13:26:45.470125 27835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:26:46.114692 master-0 kubenswrapper[27835]: I0318 13:26:46.114623 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 18 13:26:46.115630 master-0 kubenswrapper[27835]: I0318 13:26:46.114741 27835 scope.go:117] "RemoveContainer" containerID="a47091116d34af37c6ea269c78da163805f18cfa0d6c2e8a8c3b428da6f84af7" Mar 18 13:26:46.115630 master-0 kubenswrapper[27835]: I0318 13:26:46.114812 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:26:46.290621 master-0 kubenswrapper[27835]: I0318 13:26:46.290548 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" path="/var/lib/kubelet/pods/16fb4ea7f83036d9c6adf3454fc7e9db/volumes" Mar 18 13:26:46.291191 master-0 kubenswrapper[27835]: I0318 13:26:46.291134 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 18 13:26:46.315561 master-0 kubenswrapper[27835]: I0318 13:26:46.315466 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:26:46.315561 master-0 kubenswrapper[27835]: I0318 13:26:46.315530 27835 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="8d1ad55c-f2a2-48d4-b416-0dfeb45843b8" Mar 18 13:26:46.324456 
master-0 kubenswrapper[27835]: I0318 13:26:46.324384 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:26:46.324601 master-0 kubenswrapper[27835]: I0318 13:26:46.324459 27835 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="8d1ad55c-f2a2-48d4-b416-0dfeb45843b8" Mar 18 13:26:51.554829 master-0 kubenswrapper[27835]: I0318 13:26:51.554743 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:26:51.555588 master-0 kubenswrapper[27835]: I0318 13:26:51.554864 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:26:54.083615 master-0 kubenswrapper[27835]: I0318 13:26:54.083525 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:26:54.084645 master-0 kubenswrapper[27835]: I0318 13:26:54.083625 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:26:58.213481 master-0 kubenswrapper[27835]: I0318 13:26:58.213371 27835 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/1.log" Mar 18 13:26:58.215659 master-0 kubenswrapper[27835]: I0318 13:26:58.215608 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/0.log" Mar 18 13:26:58.215781 master-0 kubenswrapper[27835]: I0318 13:26:58.215664 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd" exitCode=137 Mar 18 13:26:58.215781 master-0 kubenswrapper[27835]: I0318 13:26:58.215699 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerDied","Data":"9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd"} Mar 18 13:26:58.215781 master-0 kubenswrapper[27835]: I0318 13:26:58.215737 27835 scope.go:117] "RemoveContainer" containerID="771bd5b4b91a07c5659ebb9ce85816fcbf0812eb5cfe253bf1a7b334533c5d55" Mar 18 13:26:59.230685 master-0 kubenswrapper[27835]: I0318 13:26:59.230567 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/1.log" Mar 18 13:26:59.231972 master-0 kubenswrapper[27835]: I0318 13:26:59.231921 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"} Mar 18 13:27:01.555285 master-0 kubenswrapper[27835]: I0318 13:27:01.555192 27835 patch_prober.go:28] interesting 
pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:27:01.555285 master-0 kubenswrapper[27835]: I0318 13:27:01.555283 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:27:02.026703 master-0 kubenswrapper[27835]: I0318 13:27:02.026619 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:27:02.027223 master-0 kubenswrapper[27835]: E0318 13:27:02.027197 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 13:27:02.027296 master-0 kubenswrapper[27835]: I0318 13:27:02.027225 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 13:27:02.027754 master-0 kubenswrapper[27835]: I0318 13:27:02.027678 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 18 13:27:02.028523 master-0 kubenswrapper[27835]: I0318 13:27:02.028488 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.031642 master-0 kubenswrapper[27835]: I0318 13:27:02.031584 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:27:02.032175 master-0 kubenswrapper[27835]: I0318 13:27:02.032133 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sd5ht" Mar 18 13:27:02.037857 master-0 kubenswrapper[27835]: I0318 13:27:02.037786 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:27:02.118943 master-0 kubenswrapper[27835]: I0318 13:27:02.118803 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.118943 master-0 kubenswrapper[27835]: I0318 13:27:02.118904 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.119234 master-0 kubenswrapper[27835]: I0318 13:27:02.119049 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.220236 master-0 kubenswrapper[27835]: I0318 13:27:02.220179 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.220476 master-0 kubenswrapper[27835]: I0318 13:27:02.220252 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.220476 master-0 kubenswrapper[27835]: I0318 13:27:02.220300 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.220476 master-0 kubenswrapper[27835]: I0318 13:27:02.220389 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.220726 master-0 kubenswrapper[27835]: I0318 13:27:02.220654 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.243294 master-0 kubenswrapper[27835]: I0318 13:27:02.243252 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access\") pod \"installer-5-master-0\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.366590 master-0 kubenswrapper[27835]: I0318 13:27:02.366393 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:02.799991 master-0 kubenswrapper[27835]: I0318 13:27:02.797919 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 13:27:02.803118 master-0 kubenswrapper[27835]: W0318 13:27:02.803072 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbcf7b950_0686_4e8e_87da_84c45d4ca1b4.slice/crio-08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0 WatchSource:0}: Error finding container 08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0: Status 404 returned error can't find the container with id 08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0 Mar 18 13:27:03.266530 master-0 kubenswrapper[27835]: I0318 13:27:03.266468 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"bcf7b950-0686-4e8e-87da-84c45d4ca1b4","Type":"ContainerStarted","Data":"0f79e96911cfa1e56118a281776977c083f7a6b8827655084e1ffc23c13690f3"} Mar 18 13:27:03.266869 master-0 kubenswrapper[27835]: I0318 13:27:03.266835 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"bcf7b950-0686-4e8e-87da-84c45d4ca1b4","Type":"ContainerStarted","Data":"08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0"} Mar 18 13:27:03.298121 master-0 kubenswrapper[27835]: I0318 13:27:03.298041 27835 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.298015282 podStartE2EDuration="1.298015282s" podCreationTimestamp="2026-03-18 13:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:03.287015407 +0000 UTC m=+187.252226977" watchObservedRunningTime="2026-03-18 13:27:03.298015282 +0000 UTC m=+187.263226852" Mar 18 13:27:04.083583 master-0 kubenswrapper[27835]: I0318 13:27:04.083480 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:27:04.084331 master-0 kubenswrapper[27835]: I0318 13:27:04.083602 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:27:08.083979 master-0 kubenswrapper[27835]: I0318 13:27:08.083927 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:08.084878 master-0 kubenswrapper[27835]: I0318 13:27:08.084192 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:08.091353 master-0 kubenswrapper[27835]: I0318 13:27:08.091314 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:08.324352 master-0 kubenswrapper[27835]: I0318 13:27:08.324303 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:27:11.554748 master-0 kubenswrapper[27835]: I0318 13:27:11.554669 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:27:11.555727 master-0 kubenswrapper[27835]: I0318 13:27:11.554753 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:27:14.082975 master-0 kubenswrapper[27835]: I0318 13:27:14.082908 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:27:14.082975 master-0 kubenswrapper[27835]: I0318 13:27:14.082983 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:27:20.451023 master-0 kubenswrapper[27835]: I0318 13:27:20.448734 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:27:20.456444 master-0 kubenswrapper[27835]: I0318 13:27:20.454359 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.459644 master-0 kubenswrapper[27835]: I0318 13:27:20.457332 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 13:27:20.459644 master-0 kubenswrapper[27835]: I0318 13:27:20.457547 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 13:27:20.459644 master-0 kubenswrapper[27835]: I0318 13:27:20.457652 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 13:27:20.459644 master-0 kubenswrapper[27835]: I0318 13:27:20.457752 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 13:27:20.459644 master-0 kubenswrapper[27835]: I0318 13:27:20.457860 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 13:27:20.478450 master-0 kubenswrapper[27835]: I0318 13:27:20.462300 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 13:27:20.478450 master-0 kubenswrapper[27835]: I0318 13:27:20.473162 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 13:27:20.478450 master-0 kubenswrapper[27835]: I0318 13:27:20.473382 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 13:27:20.515395 master-0 kubenswrapper[27835]: I0318 13:27:20.514950 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:27:20.527269 master-0 kubenswrapper[27835]: I0318 13:27:20.523949 27835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/metrics-server-5668cbc594-2kzhf"] Mar 18 13:27:20.528195 master-0 kubenswrapper[27835]: I0318 13:27:20.527902 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.533410 master-0 kubenswrapper[27835]: I0318 13:27:20.533345 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c8bj17hs40gij" Mar 18 13:27:20.539454 master-0 kubenswrapper[27835]: I0318 13:27:20.538120 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"] Mar 18 13:27:20.551676 master-0 kubenswrapper[27835]: I0318 13:27:20.538773 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" podUID="41cc6278-8f99-407c-ba5f-750a40e3058c" containerName="metrics-server" containerID="cri-o://de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1" gracePeriod=170 Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573090 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573210 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573248 
27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-volume\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573303 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-web-config\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573334 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573363 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573384 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7ms\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-kube-api-access-xk7ms\") pod \"alertmanager-main-0\" (UID: 
\"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573433 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573477 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-out\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573511 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.573543 master-0 kubenswrapper[27835]: I0318 13:27:20.573551 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.574055 master-0 kubenswrapper[27835]: I0318 13:27:20.573586 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.584264 master-0 kubenswrapper[27835]: I0318 13:27:20.584154 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"] Mar 18 13:27:20.586449 master-0 kubenswrapper[27835]: I0318 13:27:20.586384 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:20.587964 master-0 kubenswrapper[27835]: I0318 13:27:20.587921 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 13:27:20.588499 master-0 kubenswrapper[27835]: I0318 13:27:20.588476 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 13:27:20.588611 master-0 kubenswrapper[27835]: I0318 13:27:20.588595 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 13:27:20.589179 master-0 kubenswrapper[27835]: I0318 13:27:20.588774 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 13:27:20.589179 master-0 kubenswrapper[27835]: I0318 13:27:20.588876 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 13:27:20.589179 master-0 kubenswrapper[27835]: I0318 13:27:20.588986 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-5rua6jvkkc769" Mar 18 13:27:20.606349 master-0 kubenswrapper[27835]: I0318 13:27:20.606303 27835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-d855b697f-6v4bh"] Mar 18 13:27:20.608786 master-0 kubenswrapper[27835]: I0318 13:27:20.608759 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:20.618170 master-0 kubenswrapper[27835]: I0318 13:27:20.617992 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5668cbc594-2kzhf"] Mar 18 13:27:20.618170 master-0 kubenswrapper[27835]: I0318 13:27:20.618114 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 13:27:20.618424 master-0 kubenswrapper[27835]: I0318 13:27:20.618327 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 13:27:20.618488 master-0 kubenswrapper[27835]: I0318 13:27:20.618405 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 13:27:20.618488 master-0 kubenswrapper[27835]: I0318 13:27:20.618485 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 13:27:20.618593 master-0 kubenswrapper[27835]: I0318 13:27:20.618564 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 13:27:20.625270 master-0 kubenswrapper[27835]: I0318 13:27:20.625209 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 13:27:20.632074 master-0 kubenswrapper[27835]: I0318 13:27:20.632021 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"] Mar 18 13:27:20.648319 master-0 kubenswrapper[27835]: I0318 13:27:20.647609 27835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d855b697f-6v4bh"] Mar 18 13:27:20.669439 master-0 kubenswrapper[27835]: I0318 13:27:20.664488 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"] Mar 18 13:27:20.669439 master-0 kubenswrapper[27835]: I0318 13:27:20.665519 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:20.669439 master-0 kubenswrapper[27835]: I0318 13:27:20.667555 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 13:27:20.673110 master-0 kubenswrapper[27835]: I0318 13:27:20.671721 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 13:27:20.675013 master-0 kubenswrapper[27835]: I0318 13:27:20.674973 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.675260 master-0 kubenswrapper[27835]: I0318 13:27:20.675198 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7ms\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-kube-api-access-xk7ms\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.675371 master-0 kubenswrapper[27835]: I0318 13:27:20.675311 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" 
(UniqueName: \"kubernetes.io/empty-dir/1edb65b1-2635-4c6b-95c9-da2befb434b2-audit-log\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.675371 master-0 kubenswrapper[27835]: I0318 13:27:20.675359 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.675517 master-0 kubenswrapper[27835]: I0318 13:27:20.675491 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.675691 master-0 kubenswrapper[27835]: I0318 13:27:20.675658 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-metrics-server-audit-profiles\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.675748 master-0 kubenswrapper[27835]: I0318 13:27:20.675727 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-out\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.675859 master-0 kubenswrapper[27835]: I0318 13:27:20.675805 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:20.675940 master-0 kubenswrapper[27835]: I0318 13:27:20.675898 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-server-tls\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.676003 master-0 kubenswrapper[27835]: I0318 13:27:20.675981 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn58d\" (UniqueName: \"kubernetes.io/projected/1edb65b1-2635-4c6b-95c9-da2befb434b2-kube-api-access-pn58d\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.676043 master-0 kubenswrapper[27835]: I0318 13:27:20.676021 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-client-certs\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:20.676082 master-0 kubenswrapper[27835]: I0318 13:27:20.676050 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676114 master-0 kubenswrapper[27835]: I0318 13:27:20.676094 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676148 master-0 kubenswrapper[27835]: I0318 13:27:20.676130 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-client-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.676177 master-0 kubenswrapper[27835]: I0318 13:27:20.676162 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676844 master-0 kubenswrapper[27835]: I0318 13:27:20.676750 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676909 master-0 kubenswrapper[27835]: I0318 13:27:20.676863 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-volume\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676952 master-0 kubenswrapper[27835]: I0318 13:27:20.676919 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-web-config\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.676994 master-0 kubenswrapper[27835]: I0318 13:27:20.676950 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.677168 master-0 kubenswrapper[27835]: E0318 13:27:20.677139 27835 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Mar 18 13:27:20.677224 master-0 kubenswrapper[27835]: E0318 13:27:20.677197 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls podName:3d46bdde-fa29-4faa-a7a8-fb52f9bdd939 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:21.177181501 +0000 UTC m=+205.142393061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3d46bdde-fa29-4faa-a7a8-fb52f9bdd939") : secret "alertmanager-main-tls" not found
Mar 18 13:27:20.678756 master-0 kubenswrapper[27835]: I0318 13:27:20.678710 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.679195 master-0 kubenswrapper[27835]: I0318 13:27:20.679166 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.679251 master-0 kubenswrapper[27835]: I0318 13:27:20.679214 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"]
Mar 18 13:27:20.679515 master-0 kubenswrapper[27835]: I0318 13:27:20.679499 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.679748 master-0 kubenswrapper[27835]: I0318 13:27:20.679713 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.681638 master-0 kubenswrapper[27835]: I0318 13:27:20.679923 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-tls-assets\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.681638 master-0 kubenswrapper[27835]: I0318 13:27:20.680097 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.681638 master-0 kubenswrapper[27835]: I0318 13:27:20.680744 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.693457 master-0 kubenswrapper[27835]: I0318 13:27:20.684897 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-out\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.693457 master-0 kubenswrapper[27835]: I0318 13:27:20.691135 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-web-config\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.694175 master-0 kubenswrapper[27835]: I0318 13:27:20.694152 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-config-volume\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.712744 master-0 kubenswrapper[27835]: I0318 13:27:20.711352 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7ms\" (UniqueName: \"kubernetes.io/projected/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-kube-api-access-xk7ms\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 13:27:20.778979 master-0 kubenswrapper[27835]: I0318 13:27:20.778912 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-grpc-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.778986 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.779014 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phvn\" (UniqueName: \"kubernetes.io/projected/cfcf230d-b184-4d7f-aedc-e58264252b88-kube-api-access-2phvn\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.779072 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1edb65b1-2635-4c6b-95c9-da2befb434b2-audit-log\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.779103 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.779137 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-metrics-server-audit-profiles\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779197 master-0 kubenswrapper[27835]: I0318 13:27:20.779169 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-metrics-client-ca\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779230 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-serving-certs-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779265 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779294 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779322 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-server-tls\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779350 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn58d\" (UniqueName: \"kubernetes.io/projected/1edb65b1-2635-4c6b-95c9-da2befb434b2-kube-api-access-pn58d\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779400 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfcf230d-b184-4d7f-aedc-e58264252b88-metrics-client-ca\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779459 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-client-certs\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.779519 master-0 kubenswrapper[27835]: I0318 13:27:20.779489 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9575265e-a8b1-4bf1-aea7-71448e782b44-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"
Mar 18 13:27:20.779886 master-0 kubenswrapper[27835]: I0318 13:27:20.779807 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-federate-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.780645 master-0 kubenswrapper[27835]: I0318 13:27:20.780580 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1edb65b1-2635-4c6b-95c9-da2befb434b2-audit-log\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.780919 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-client-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.781749 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.781794 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.781895 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.781952 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.782014 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dg8p\" (UniqueName: \"kubernetes.io/projected/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-kube-api-access-8dg8p\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.782057 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.782105 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.782134 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.780949 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.781558 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1edb65b1-2635-4c6b-95c9-da2befb434b2-metrics-server-audit-profiles\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.784403 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-client-certs\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.785072 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-secret-metrics-server-tls\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.785821 master-0 kubenswrapper[27835]: I0318 13:27:20.785692 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edb65b1-2635-4c6b-95c9-da2befb434b2-client-ca-bundle\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.801977 master-0 kubenswrapper[27835]: I0318 13:27:20.801930 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn58d\" (UniqueName: \"kubernetes.io/projected/1edb65b1-2635-4c6b-95c9-da2befb434b2-kube-api-access-pn58d\") pod \"metrics-server-5668cbc594-2kzhf\" (UID: \"1edb65b1-2635-4c6b-95c9-da2befb434b2\") " pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.882178 master-0 kubenswrapper[27835]: I0318 13:27:20.882108 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:27:20.883469 master-0 kubenswrapper[27835]: I0318 13:27:20.883380 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.883552 master-0 kubenswrapper[27835]: I0318 13:27:20.883498 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.883552 master-0 kubenswrapper[27835]: I0318 13:27:20.883532 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.883644 master-0 kubenswrapper[27835]: I0318 13:27:20.883567 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.883644 master-0 kubenswrapper[27835]: I0318 13:27:20.883604 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dg8p\" (UniqueName: \"kubernetes.io/projected/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-kube-api-access-8dg8p\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884067 master-0 kubenswrapper[27835]: I0318 13:27:20.884033 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"
Mar 18 13:27:20.884150 master-0 kubenswrapper[27835]: I0318 13:27:20.884090 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.884150 master-0 kubenswrapper[27835]: I0318 13:27:20.884119 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884270 master-0 kubenswrapper[27835]: I0318 13:27:20.884166 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-grpc-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.884270 master-0 kubenswrapper[27835]: I0318 13:27:20.884184 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.884270 master-0 kubenswrapper[27835]: I0318 13:27:20.884203 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2phvn\" (UniqueName: \"kubernetes.io/projected/cfcf230d-b184-4d7f-aedc-e58264252b88-kube-api-access-2phvn\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.884407 master-0 kubenswrapper[27835]: I0318 13:27:20.884282 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-metrics-client-ca\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884407 master-0 kubenswrapper[27835]: I0318 13:27:20.884324 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-serving-certs-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884407 master-0 kubenswrapper[27835]: I0318 13:27:20.884349 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884407 master-0 kubenswrapper[27835]: I0318 13:27:20.884370 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.884407 master-0 kubenswrapper[27835]: I0318 13:27:20.884397 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfcf230d-b184-4d7f-aedc-e58264252b88-metrics-client-ca\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.884642 master-0 kubenswrapper[27835]: I0318 13:27:20.884436 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9575265e-a8b1-4bf1-aea7-71448e782b44-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"
Mar 18 13:27:20.884642 master-0 kubenswrapper[27835]: I0318 13:27:20.884463 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-federate-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: E0318 13:27:20.885286 27835 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: E0318 13:27:20.885452 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls podName:7e1495df-e141-4c4d-9a05-5e8f3ee2667f nodeName:}" failed. No retries permitted until 2026-03-18 13:27:21.38537205 +0000 UTC m=+205.350583800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls") pod "telemeter-client-d855b697f-6v4bh" (UID: "7e1495df-e141-4c4d-9a05-5e8f3ee2667f") : secret "telemeter-client-tls" not found
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.887333 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: E0318 13:27:20.888344 27835 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: E0318 13:27:20.888461 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert podName:9575265e-a8b1-4bf1-aea7-71448e782b44 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:21.388431978 +0000 UTC m=+205.353643538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-8tq2r" (UID: "9575265e-a8b1-4bf1-aea7-71448e782b44") : secret "networking-console-plugin-cert" not found
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.889254 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-trusted-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.889350 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-metrics-client-ca\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.889562 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9575265e-a8b1-4bf1-aea7-71448e782b44-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.889771 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-serving-certs-ca-bundle\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.889777 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfcf230d-b184-4d7f-aedc-e58264252b88-metrics-client-ca\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.890176 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.890316 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.890437 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-federate-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.890602 master-0 kubenswrapper[27835]: I0318 13:27:20.890535 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.895158 master-0 kubenswrapper[27835]: I0318 13:27:20.894074 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh"
Mar 18 13:27:20.895713 master-0 kubenswrapper[27835]: I0318 13:27:20.895667 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.896460 master-0 kubenswrapper[27835]: I0318 13:27:20.896136 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-grpc-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"
Mar 18 13:27:20.898236 master-0
kubenswrapper[27835]: I0318 13:27:20.898172 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/cfcf230d-b184-4d7f-aedc-e58264252b88-secret-thanos-querier-tls\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:20.912354 master-0 kubenswrapper[27835]: I0318 13:27:20.910785 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2phvn\" (UniqueName: \"kubernetes.io/projected/cfcf230d-b184-4d7f-aedc-e58264252b88-kube-api-access-2phvn\") pod \"thanos-querier-5f7fb669fb-msvkz\" (UID: \"cfcf230d-b184-4d7f-aedc-e58264252b88\") " pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:20.912354 master-0 kubenswrapper[27835]: I0318 13:27:20.911146 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:20.913829 master-0 kubenswrapper[27835]: I0318 13:27:20.913790 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dg8p\" (UniqueName: \"kubernetes.io/projected/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-kube-api-access-8dg8p\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:21.187944 master-0 kubenswrapper[27835]: I0318 13:27:21.187886 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:21.188244 master-0 kubenswrapper[27835]: E0318 13:27:21.188151 27835 secret.go:189] Couldn't get 
secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 13:27:21.188363 master-0 kubenswrapper[27835]: E0318 13:27:21.188337 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls podName:3d46bdde-fa29-4faa-a7a8-fb52f9bdd939 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:22.188310247 +0000 UTC m=+206.153521797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3d46bdde-fa29-4faa-a7a8-fb52f9bdd939") : secret "alertmanager-main-tls" not found Mar 18 13:27:21.391486 master-0 kubenswrapper[27835]: I0318 13:27:21.391432 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:21.391792 master-0 kubenswrapper[27835]: I0318 13:27:21.391771 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:21.392807 master-0 kubenswrapper[27835]: E0318 13:27:21.392788 27835 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 13:27:21.392946 master-0 kubenswrapper[27835]: E0318 13:27:21.392933 
27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert podName:9575265e-a8b1-4bf1-aea7-71448e782b44 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:22.392914684 +0000 UTC m=+206.358126244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-8tq2r" (UID: "9575265e-a8b1-4bf1-aea7-71448e782b44") : secret "networking-console-plugin-cert" not found Mar 18 13:27:21.394327 master-0 kubenswrapper[27835]: E0318 13:27:21.394292 27835 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:27:21.394398 master-0 kubenswrapper[27835]: E0318 13:27:21.394340 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls podName:7e1495df-e141-4c4d-9a05-5e8f3ee2667f nodeName:}" failed. No retries permitted until 2026-03-18 13:27:22.394329524 +0000 UTC m=+206.359541084 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls") pod "telemeter-client-d855b697f-6v4bh" (UID: "7e1495df-e141-4c4d-9a05-5e8f3ee2667f") : secret "telemeter-client-tls" not found Mar 18 13:27:21.438740 master-0 kubenswrapper[27835]: I0318 13:27:21.438679 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5f7fb669fb-msvkz"] Mar 18 13:27:21.439896 master-0 kubenswrapper[27835]: W0318 13:27:21.439839 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfcf230d_b184_4d7f_aedc_e58264252b88.slice/crio-1ab972badcf8a61c1fe810269a880b73b49367194e467a5a1963342d91f4afca WatchSource:0}: Error finding container 1ab972badcf8a61c1fe810269a880b73b49367194e467a5a1963342d91f4afca: Status 404 returned error can't find the container with id 1ab972badcf8a61c1fe810269a880b73b49367194e467a5a1963342d91f4afca Mar 18 13:27:21.558111 master-0 kubenswrapper[27835]: I0318 13:27:21.558000 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body= Mar 18 13:27:21.559055 master-0 kubenswrapper[27835]: I0318 13:27:21.558700 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" Mar 18 13:27:21.580536 master-0 kubenswrapper[27835]: I0318 13:27:21.580486 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-5668cbc594-2kzhf"] Mar 18 13:27:21.584600 master-0 kubenswrapper[27835]: 
W0318 13:27:21.584550 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1edb65b1_2635_4c6b_95c9_da2befb434b2.slice/crio-48f4f7639eaae7420dd8bd0a26b6d642ed72de6feac6a7aa7f0f289a090e0d67 WatchSource:0}: Error finding container 48f4f7639eaae7420dd8bd0a26b6d642ed72de6feac6a7aa7f0f289a090e0d67: Status 404 returned error can't find the container with id 48f4f7639eaae7420dd8bd0a26b6d642ed72de6feac6a7aa7f0f289a090e0d67 Mar 18 13:27:22.209483 master-0 kubenswrapper[27835]: I0318 13:27:22.209355 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:22.209716 master-0 kubenswrapper[27835]: E0318 13:27:22.209561 27835 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 13:27:22.209716 master-0 kubenswrapper[27835]: E0318 13:27:22.209612 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls podName:3d46bdde-fa29-4faa-a7a8-fb52f9bdd939 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:24.209597222 +0000 UTC m=+208.174808782 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3d46bdde-fa29-4faa-a7a8-fb52f9bdd939") : secret "alertmanager-main-tls" not found Mar 18 13:27:22.413136 master-0 kubenswrapper[27835]: I0318 13:27:22.412970 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:22.413395 master-0 kubenswrapper[27835]: E0318 13:27:22.413224 27835 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 13:27:22.413395 master-0 kubenswrapper[27835]: E0318 13:27:22.413336 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert podName:9575265e-a8b1-4bf1-aea7-71448e782b44 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:24.413314363 +0000 UTC m=+208.378525923 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-8tq2r" (UID: "9575265e-a8b1-4bf1-aea7-71448e782b44") : secret "networking-console-plugin-cert" not found Mar 18 13:27:22.413395 master-0 kubenswrapper[27835]: I0318 13:27:22.413241 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:22.413515 master-0 kubenswrapper[27835]: E0318 13:27:22.413478 27835 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:27:22.413771 master-0 kubenswrapper[27835]: E0318 13:27:22.413596 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls podName:7e1495df-e141-4c4d-9a05-5e8f3ee2667f nodeName:}" failed. No retries permitted until 2026-03-18 13:27:24.41355459 +0000 UTC m=+208.378766150 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls") pod "telemeter-client-d855b697f-6v4bh" (UID: "7e1495df-e141-4c4d-9a05-5e8f3ee2667f") : secret "telemeter-client-tls" not found Mar 18 13:27:22.430899 master-0 kubenswrapper[27835]: I0318 13:27:22.430832 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" event={"ID":"1edb65b1-2635-4c6b-95c9-da2befb434b2","Type":"ContainerStarted","Data":"99a61cb67a46491766fe57549f76593b4b73ec2342f933904d02bfe28fb0b9b5"} Mar 18 13:27:22.430899 master-0 kubenswrapper[27835]: I0318 13:27:22.430895 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" event={"ID":"1edb65b1-2635-4c6b-95c9-da2befb434b2","Type":"ContainerStarted","Data":"48f4f7639eaae7420dd8bd0a26b6d642ed72de6feac6a7aa7f0f289a090e0d67"} Mar 18 13:27:22.434612 master-0 kubenswrapper[27835]: I0318 13:27:22.434563 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"1ab972badcf8a61c1fe810269a880b73b49367194e467a5a1963342d91f4afca"} Mar 18 13:27:22.457397 master-0 kubenswrapper[27835]: I0318 13:27:22.457306 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" podStartSLOduration=2.45728488 podStartE2EDuration="2.45728488s" podCreationTimestamp="2026-03-18 13:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:22.448574931 +0000 UTC m=+206.413786501" watchObservedRunningTime="2026-03-18 13:27:22.45728488 +0000 UTC m=+206.422496440" Mar 18 13:27:24.083279 master-0 kubenswrapper[27835]: I0318 13:27:24.083198 27835 
patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 18 13:27:24.083814 master-0 kubenswrapper[27835]: I0318 13:27:24.083292 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 18 13:27:24.248849 master-0 kubenswrapper[27835]: I0318 13:27:24.248746 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:24.249558 master-0 kubenswrapper[27835]: E0318 13:27:24.249044 27835 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Mar 18 13:27:24.249558 master-0 kubenswrapper[27835]: E0318 13:27:24.249137 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls podName:3d46bdde-fa29-4faa-a7a8-fb52f9bdd939 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:28.249110035 +0000 UTC m=+212.214321595 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "3d46bdde-fa29-4faa-a7a8-fb52f9bdd939") : secret "alertmanager-main-tls" not found Mar 18 13:27:24.453204 master-0 kubenswrapper[27835]: I0318 13:27:24.453116 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:24.453370 master-0 kubenswrapper[27835]: I0318 13:27:24.453214 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:24.454149 master-0 kubenswrapper[27835]: E0318 13:27:24.453679 27835 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 13:27:24.454149 master-0 kubenswrapper[27835]: E0318 13:27:24.453777 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert podName:9575265e-a8b1-4bf1-aea7-71448e782b44 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:28.453755433 +0000 UTC m=+212.418967003 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-8tq2r" (UID: "9575265e-a8b1-4bf1-aea7-71448e782b44") : secret "networking-console-plugin-cert" not found Mar 18 13:27:24.454149 master-0 kubenswrapper[27835]: E0318 13:27:24.453761 27835 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 13:27:24.454149 master-0 kubenswrapper[27835]: E0318 13:27:24.453873 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls podName:7e1495df-e141-4c4d-9a05-5e8f3ee2667f nodeName:}" failed. No retries permitted until 2026-03-18 13:27:28.453834556 +0000 UTC m=+212.419046116 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls") pod "telemeter-client-d855b697f-6v4bh" (UID: "7e1495df-e141-4c4d-9a05-5e8f3ee2667f") : secret "telemeter-client-tls" not found Mar 18 13:27:25.462060 master-0 kubenswrapper[27835]: I0318 13:27:25.461997 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"7d4536a5870c22e19303a46bdc9322fa51534fe8551374934a0227b1558a631e"} Mar 18 13:27:25.462060 master-0 kubenswrapper[27835]: I0318 13:27:25.462055 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"07c5cd058514ae44ef69543b38234def25750064606dc7d44697955150401d14"} Mar 18 13:27:25.462756 master-0 kubenswrapper[27835]: I0318 13:27:25.462079 27835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"c4043202cfe3427b656458ebae0cde10ff31fff7420860751d05f85e9d55b227"} Mar 18 13:27:26.471884 master-0 kubenswrapper[27835]: I0318 13:27:26.471688 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"90fb41a21d84cf7e25bc9258369f802c2d7ca29220ae071df2876317d5a18126"} Mar 18 13:27:26.471884 master-0 kubenswrapper[27835]: I0318 13:27:26.471763 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"f1687766fd7552fe8a147ddba416e1b67a22f9c88692239d3bbefc7e6caabda5"} Mar 18 13:27:26.471884 master-0 kubenswrapper[27835]: I0318 13:27:26.471783 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" event={"ID":"cfcf230d-b184-4d7f-aedc-e58264252b88","Type":"ContainerStarted","Data":"287df07b87937607c2dcdcb401fd78c9f52a5fb69f107b922d2c734a7285ac53"} Mar 18 13:27:26.472814 master-0 kubenswrapper[27835]: I0318 13:27:26.471941 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:26.503339 master-0 kubenswrapper[27835]: I0318 13:27:26.503235 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" podStartSLOduration=2.022565138 podStartE2EDuration="6.503218231s" podCreationTimestamp="2026-03-18 13:27:20 +0000 UTC" firstStartedPulling="2026-03-18 13:27:21.442211153 +0000 UTC m=+205.407422713" lastFinishedPulling="2026-03-18 13:27:25.922864236 +0000 UTC m=+209.888075806" observedRunningTime="2026-03-18 
13:27:26.499180016 +0000 UTC m=+210.464391586" watchObservedRunningTime="2026-03-18 13:27:26.503218231 +0000 UTC m=+210.468429791" Mar 18 13:27:26.511983 master-0 kubenswrapper[27835]: I0318 13:27:26.511912 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 13:27:26.514223 master-0 kubenswrapper[27835]: I0318 13:27:26.514185 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.531944 master-0 kubenswrapper[27835]: I0318 13:27:26.531865 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 13:27:26.531944 master-0 kubenswrapper[27835]: I0318 13:27:26.531896 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 13:27:26.532130 master-0 kubenswrapper[27835]: I0318 13:27:26.531862 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 13:27:26.532130 master-0 kubenswrapper[27835]: I0318 13:27:26.531911 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 13:27:26.537674 master-0 kubenswrapper[27835]: I0318 13:27:26.537630 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 13:27:26.538505 master-0 kubenswrapper[27835]: I0318 13:27:26.538440 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 13:27:26.538650 master-0 kubenswrapper[27835]: I0318 13:27:26.538625 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6v458hjp7b0gm" Mar 18 13:27:26.540300 master-0 kubenswrapper[27835]: I0318 13:27:26.540269 27835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 13:27:26.540943 master-0 kubenswrapper[27835]: I0318 13:27:26.540912 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 13:27:26.546691 master-0 kubenswrapper[27835]: I0318 13:27:26.546635 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 13:27:26.559804 master-0 kubenswrapper[27835]: I0318 13:27:26.559742 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 13:27:26.570560 master-0 kubenswrapper[27835]: I0318 13:27:26.570487 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 13:27:26.571524 master-0 kubenswrapper[27835]: I0318 13:27:26.571477 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.591972 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592030 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 
13:27:26.592061 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592095 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592117 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592146 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592179 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" 
(UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592202 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592225 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592244 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-config-out\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592265 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592284 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592301 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592321 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lplw\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-kube-api-access-5lplw\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592338 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592352 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 
13:27:26.592367 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-web-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.593130 master-0 kubenswrapper[27835]: I0318 13:27:26.592394 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.693807 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.693882 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: E0318 13:27:26.694038 27835 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694119 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: E0318 13:27:26.694136 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls podName:7b756a66-3b31-4c6c-acf2-94a47924cd17 nodeName:}" failed. No retries permitted until 2026-03-18 13:27:27.194116166 +0000 UTC m=+211.159327726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "7b756a66-3b31-4c6c-acf2-94a47924cd17") : secret "prometheus-k8s-tls" not found Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694222 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694319 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694366 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694453 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694492 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-config-out\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694534 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694562 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694584 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694606 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lplw\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-kube-api-access-5lplw\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694625 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694643 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694663 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-web-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694708 27835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694718 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694788 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.694798 master-0 kubenswrapper[27835]: I0318 13:27:26.694811 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.696225 master-0 kubenswrapper[27835]: I0318 13:27:26.696003 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.696297 master-0 kubenswrapper[27835]: I0318 13:27:26.696261 
27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.697445 master-0 kubenswrapper[27835]: I0318 13:27:26.697334 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.697529 master-0 kubenswrapper[27835]: I0318 13:27:26.697499 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.703351 master-0 kubenswrapper[27835]: I0318 13:27:26.703173 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.703351 master-0 kubenswrapper[27835]: I0318 13:27:26.703281 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.704184 master-0 kubenswrapper[27835]: 
I0318 13:27:26.704136 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 kubenswrapper[27835]: I0318 13:27:26.706943 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 kubenswrapper[27835]: I0318 13:27:26.706968 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 kubenswrapper[27835]: I0318 13:27:26.707038 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 kubenswrapper[27835]: I0318 13:27:26.707066 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 
kubenswrapper[27835]: I0318 13:27:26.707495 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.707650 master-0 kubenswrapper[27835]: I0318 13:27:26.707587 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-web-config\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.711431 master-0 kubenswrapper[27835]: I0318 13:27:26.707989 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7b756a66-3b31-4c6c-acf2-94a47924cd17-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.711431 master-0 kubenswrapper[27835]: I0318 13:27:26.710644 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7b756a66-3b31-4c6c-acf2-94a47924cd17-config-out\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:26.718456 master-0 kubenswrapper[27835]: I0318 13:27:26.717270 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lplw\" (UniqueName: \"kubernetes.io/projected/7b756a66-3b31-4c6c-acf2-94a47924cd17-kube-api-access-5lplw\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:27.201176 master-0 
kubenswrapper[27835]: I0318 13:27:27.201101 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:27.204020 master-0 kubenswrapper[27835]: I0318 13:27:27.203972 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/7b756a66-3b31-4c6c-acf2-94a47924cd17-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"7b756a66-3b31-4c6c-acf2-94a47924cd17\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:27.436646 master-0 kubenswrapper[27835]: I0318 13:27:27.436590 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:27:27.890305 master-0 kubenswrapper[27835]: I0318 13:27:27.890245 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 13:27:27.896819 master-0 kubenswrapper[27835]: W0318 13:27:27.896762 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b756a66_3b31_4c6c_acf2_94a47924cd17.slice/crio-5dcae1dcf298d25713cb306915ca24a86dfccb2c34fc4aae29eb6aa025a2038c WatchSource:0}: Error finding container 5dcae1dcf298d25713cb306915ca24a86dfccb2c34fc4aae29eb6aa025a2038c: Status 404 returned error can't find the container with id 5dcae1dcf298d25713cb306915ca24a86dfccb2c34fc4aae29eb6aa025a2038c Mar 18 13:27:28.318099 master-0 kubenswrapper[27835]: I0318 13:27:28.317217 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:28.320084 master-0 kubenswrapper[27835]: I0318 13:27:28.320042 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/3d46bdde-fa29-4faa-a7a8-fb52f9bdd939-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:28.492016 master-0 kubenswrapper[27835]: I0318 13:27:28.491947 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"5dcae1dcf298d25713cb306915ca24a86dfccb2c34fc4aae29eb6aa025a2038c"} Mar 18 13:27:28.519946 master-0 kubenswrapper[27835]: I0318 13:27:28.519885 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:28.519946 master-0 kubenswrapper[27835]: I0318 13:27:28.519951 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:28.523035 master-0 kubenswrapper[27835]: I0318 13:27:28.523015 27835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7e1495df-e141-4c4d-9a05-5e8f3ee2667f-telemeter-client-tls\") pod \"telemeter-client-d855b697f-6v4bh\" (UID: \"7e1495df-e141-4c4d-9a05-5e8f3ee2667f\") " pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:28.524081 master-0 kubenswrapper[27835]: I0318 13:27:28.524038 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9575265e-a8b1-4bf1-aea7-71448e782b44-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-8tq2r\" (UID: \"9575265e-a8b1-4bf1-aea7-71448e782b44\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:28.561717 master-0 kubenswrapper[27835]: I0318 13:27:28.561663 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" Mar 18 13:27:28.587014 master-0 kubenswrapper[27835]: I0318 13:27:28.586917 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 13:27:28.746675 master-0 kubenswrapper[27835]: I0318 13:27:28.744927 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" Mar 18 13:27:29.028007 master-0 kubenswrapper[27835]: I0318 13:27:29.027927 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 13:27:29.101088 master-0 kubenswrapper[27835]: I0318 13:27:29.101009 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r"] Mar 18 13:27:29.172903 master-0 kubenswrapper[27835]: I0318 13:27:29.172195 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-d855b697f-6v4bh"] Mar 18 13:27:29.174764 master-0 kubenswrapper[27835]: W0318 13:27:29.174701 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e1495df_e141_4c4d_9a05_5e8f3ee2667f.slice/crio-994295790b9aa1d9b07e599ac17af400d4f7c0efbc880e1c1c7c9bd248f543fb WatchSource:0}: Error finding container 994295790b9aa1d9b07e599ac17af400d4f7c0efbc880e1c1c7c9bd248f543fb: Status 404 returned error can't find the container with id 994295790b9aa1d9b07e599ac17af400d4f7c0efbc880e1c1c7c9bd248f543fb Mar 18 13:27:29.503768 master-0 kubenswrapper[27835]: I0318 13:27:29.503675 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" event={"ID":"9575265e-a8b1-4bf1-aea7-71448e782b44","Type":"ContainerStarted","Data":"307c6452835ac57d11f113e228a8bceaf8426a85b8b5cb86d72a0a885d88f322"} Mar 18 13:27:29.505684 master-0 kubenswrapper[27835]: I0318 13:27:29.505620 27835 generic.go:334] "Generic (PLEG): container finished" podID="7b756a66-3b31-4c6c-acf2-94a47924cd17" containerID="270a019a5b14294b06ce5e9cc7e483f8f231760994b22dbb79fead1161319c35" exitCode=0 Mar 18 13:27:29.505788 master-0 kubenswrapper[27835]: I0318 13:27:29.505708 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerDied","Data":"270a019a5b14294b06ce5e9cc7e483f8f231760994b22dbb79fead1161319c35"} Mar 18 13:27:29.508510 master-0 kubenswrapper[27835]: I0318 13:27:29.508451 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"71125a7bd9167b185f29297150ff82b10d6c5def500f5cb6b957a872a9125b59"} Mar 18 13:27:29.512441 master-0 kubenswrapper[27835]: I0318 13:27:29.512357 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" event={"ID":"7e1495df-e141-4c4d-9a05-5e8f3ee2667f","Type":"ContainerStarted","Data":"994295790b9aa1d9b07e599ac17af400d4f7c0efbc880e1c1c7c9bd248f543fb"} Mar 18 13:27:30.521938 master-0 kubenswrapper[27835]: I0318 13:27:30.521843 27835 generic.go:334] "Generic (PLEG): container finished" podID="3d46bdde-fa29-4faa-a7a8-fb52f9bdd939" containerID="1453a05dd201ad6e54bbc3aed9319a3a68d4d8bac6f9f98fff7680d90d4dbb7b" exitCode=0 Mar 18 13:27:30.521938 master-0 kubenswrapper[27835]: I0318 13:27:30.521887 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerDied","Data":"1453a05dd201ad6e54bbc3aed9319a3a68d4d8bac6f9f98fff7680d90d4dbb7b"} Mar 18 13:27:30.922114 master-0 kubenswrapper[27835]: I0318 13:27:30.922063 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5f7fb669fb-msvkz" Mar 18 13:27:31.119066 master-0 kubenswrapper[27835]: I0318 13:27:31.119008 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-f6zh2"] Mar 18 13:27:31.120255 master-0 kubenswrapper[27835]: I0318 13:27:31.120229 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.122303 master-0 kubenswrapper[27835]: I0318 13:27:31.122281 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7rrtk" Mar 18 13:27:31.122447 master-0 kubenswrapper[27835]: I0318 13:27:31.122437 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 13:27:31.170052 master-0 kubenswrapper[27835]: I0318 13:27:31.169998 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5zh\" (UniqueName: \"kubernetes.io/projected/407238a6-5f5c-4676-8ece-b9146f67cfb9-kube-api-access-4k5zh\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.170279 master-0 kubenswrapper[27835]: I0318 13:27:31.170071 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/407238a6-5f5c-4676-8ece-b9146f67cfb9-serviceca\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.170279 master-0 kubenswrapper[27835]: I0318 13:27:31.170100 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/407238a6-5f5c-4676-8ece-b9146f67cfb9-host\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.271266 master-0 kubenswrapper[27835]: I0318 13:27:31.271128 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k5zh\" (UniqueName: \"kubernetes.io/projected/407238a6-5f5c-4676-8ece-b9146f67cfb9-kube-api-access-4k5zh\") pod \"node-ca-f6zh2\" (UID: 
\"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.271266 master-0 kubenswrapper[27835]: I0318 13:27:31.271207 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/407238a6-5f5c-4676-8ece-b9146f67cfb9-serviceca\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.271266 master-0 kubenswrapper[27835]: I0318 13:27:31.271232 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/407238a6-5f5c-4676-8ece-b9146f67cfb9-host\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.271543 master-0 kubenswrapper[27835]: I0318 13:27:31.271433 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/407238a6-5f5c-4676-8ece-b9146f67cfb9-host\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.272911 master-0 kubenswrapper[27835]: I0318 13:27:31.272885 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/407238a6-5f5c-4676-8ece-b9146f67cfb9-serviceca\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.289726 master-0 kubenswrapper[27835]: I0318 13:27:31.289690 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k5zh\" (UniqueName: \"kubernetes.io/projected/407238a6-5f5c-4676-8ece-b9146f67cfb9-kube-api-access-4k5zh\") pod \"node-ca-f6zh2\" (UID: \"407238a6-5f5c-4676-8ece-b9146f67cfb9\") " pod="openshift-image-registry/node-ca-f6zh2" Mar 18 13:27:31.444013 master-0 
kubenswrapper[27835]: I0318 13:27:31.443941 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-f6zh2"
Mar 18 13:27:31.463471 master-0 kubenswrapper[27835]: W0318 13:27:31.463387 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod407238a6_5f5c_4676_8ece_b9146f67cfb9.slice/crio-102a6e968b16aee98305c3f56a962a70ab1f40c2c9908002e198c3abb3076e0b WatchSource:0}: Error finding container 102a6e968b16aee98305c3f56a962a70ab1f40c2c9908002e198c3abb3076e0b: Status 404 returned error can't find the container with id 102a6e968b16aee98305c3f56a962a70ab1f40c2c9908002e198c3abb3076e0b
Mar 18 13:27:31.536062 master-0 kubenswrapper[27835]: I0318 13:27:31.535907 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f6zh2" event={"ID":"407238a6-5f5c-4676-8ece-b9146f67cfb9","Type":"ContainerStarted","Data":"102a6e968b16aee98305c3f56a962a70ab1f40c2c9908002e198c3abb3076e0b"}
Mar 18 13:27:31.540371 master-0 kubenswrapper[27835]: I0318 13:27:31.539594 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" event={"ID":"9575265e-a8b1-4bf1-aea7-71448e782b44","Type":"ContainerStarted","Data":"8ebd0a583be6202bef3ee672e3a2e2e806765335c5ed571bf8f1c779de02f6f5"}
Mar 18 13:27:31.555696 master-0 kubenswrapper[27835]: I0318 13:27:31.555617 27835 patch_prober.go:28] interesting pod/console-5df65d974f-mpf5j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused" start-of-body=
Mar 18 13:27:31.556231 master-0 kubenswrapper[27835]: I0318 13:27:31.556058 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.88:8443/health\": dial tcp 10.128.0.88:8443: connect: connection refused"
Mar 18 13:27:31.562805 master-0 kubenswrapper[27835]: I0318 13:27:31.562060 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-8tq2r" podStartSLOduration=10.127679849 podStartE2EDuration="11.562010328s" podCreationTimestamp="2026-03-18 13:27:20 +0000 UTC" firstStartedPulling="2026-03-18 13:27:29.104817278 +0000 UTC m=+213.070028838" lastFinishedPulling="2026-03-18 13:27:30.539147747 +0000 UTC m=+214.504359317" observedRunningTime="2026-03-18 13:27:31.555562114 +0000 UTC m=+215.520773694" watchObservedRunningTime="2026-03-18 13:27:31.562010328 +0000 UTC m=+215.527221898"
Mar 18 13:27:32.548691 master-0 kubenswrapper[27835]: I0318 13:27:32.548529 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/telemeter-client/0.log"
Mar 18 13:27:32.548691 master-0 kubenswrapper[27835]: I0318 13:27:32.548587 27835 generic.go:334] "Generic (PLEG): container finished" podID="7e1495df-e141-4c4d-9a05-5e8f3ee2667f" containerID="888892f54ea33753605516c6fdb94e05be3924c18099f6910ede799b91a9eef2" exitCode=1
Mar 18 13:27:32.548691 master-0 kubenswrapper[27835]: I0318 13:27:32.548675 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" event={"ID":"7e1495df-e141-4c4d-9a05-5e8f3ee2667f","Type":"ContainerStarted","Data":"04359e21313dc7db788275b7ef4448d27efd4c02ab51c6493b7f4580616797c4"}
Mar 18 13:27:32.549649 master-0 kubenswrapper[27835]: I0318 13:27:32.548728 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" event={"ID":"7e1495df-e141-4c4d-9a05-5e8f3ee2667f","Type":"ContainerDied","Data":"888892f54ea33753605516c6fdb94e05be3924c18099f6910ede799b91a9eef2"}
Mar 18 13:27:34.087881 master-0 kubenswrapper[27835]: I0318 13:27:34.087605 27835 patch_prober.go:28] interesting pod/console-686bcb5cf-88rcq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body=
Mar 18 13:27:34.087881 master-0 kubenswrapper[27835]: I0318 13:27:34.087657 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" probeResult="failure" output="Get \"https://10.128.0.89:8443/health\": dial tcp 10.128.0.89:8443: connect: connection refused"
Mar 18 13:27:34.567499 master-0 kubenswrapper[27835]: I0318 13:27:34.566670 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/telemeter-client/0.log"
Mar 18 13:27:34.567499 master-0 kubenswrapper[27835]: I0318 13:27:34.566733 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" event={"ID":"7e1495df-e141-4c4d-9a05-5e8f3ee2667f","Type":"ContainerStarted","Data":"f345b4203272b886c53eab54a1ec5165200159c85c4ff7426a5647aab8f1cfa0"}
Mar 18 13:27:34.567499 master-0 kubenswrapper[27835]: I0318 13:27:34.567165 27835 scope.go:117] "RemoveContainer" containerID="888892f54ea33753605516c6fdb94e05be3924c18099f6910ede799b91a9eef2"
Mar 18 13:27:34.575141 master-0 kubenswrapper[27835]: I0318 13:27:34.575104 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"b52368e3696c1ce36c2a53a3db5ea8b1377429c9fcad7c7be8a1cdabec6fa67f"}
Mar 18 13:27:34.575324 master-0 kubenswrapper[27835]: I0318 13:27:34.575307 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"08f615d37ed0f24d78c55a03e7d0209fbba02f8c26d7cd043419896e1aba67a7"}
Mar 18 13:27:34.575457 master-0 kubenswrapper[27835]: I0318 13:27:34.575435 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"effb954908480c44b58758c0f77b0fd89dd48fac45a829dca8fdc297b2c22fdc"}
Mar 18 13:27:34.582718 master-0 kubenswrapper[27835]: I0318 13:27:34.582653 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"e545d989093b790eaa87f74eb030144786c7750be7bcc19c416ef61715837993"}
Mar 18 13:27:34.582718 master-0 kubenswrapper[27835]: I0318 13:27:34.582715 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"c3de83278a40baac51ea03336166c901a49d556bc51ea72a041d9dba88abac04"}
Mar 18 13:27:34.582935 master-0 kubenswrapper[27835]: I0318 13:27:34.582729 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"c1cc4d3ed988b78069de0d9bf83f3d40ac94a84f958e838ff3f61a2411bc7363"}
Mar 18 13:27:35.600352 master-0 kubenswrapper[27835]: I0318 13:27:35.600189 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/telemeter-client/0.log"
Mar 18 13:27:35.601182 master-0 kubenswrapper[27835]: I0318 13:27:35.600642 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" event={"ID":"7e1495df-e141-4c4d-9a05-5e8f3ee2667f","Type":"ContainerStarted","Data":"c7bff46fe43c96ea918165b7258d638daf5ef9b660617e011d81232ba2359680"}
Mar 18 13:27:35.604606 master-0 kubenswrapper[27835]: I0318 13:27:35.604548 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-f6zh2" event={"ID":"407238a6-5f5c-4676-8ece-b9146f67cfb9","Type":"ContainerStarted","Data":"556fb97ccc66bee4153603e337b8ee889ac93ceb2d0b67f8b052c130ae5f80bb"}
Mar 18 13:27:35.609788 master-0 kubenswrapper[27835]: I0318 13:27:35.609709 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"5f34ba3d2829246d4a891fd08a29d8387cdcbee343933bb54e01f51ae7849428"}
Mar 18 13:27:35.609893 master-0 kubenswrapper[27835]: I0318 13:27:35.609793 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"dc33908549226d685c1fbc65a3b916d3738baf676b1d596b2e264c6977d508c7"}
Mar 18 13:27:35.609893 master-0 kubenswrapper[27835]: I0318 13:27:35.609821 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"7b756a66-3b31-4c6c-acf2-94a47924cd17","Type":"ContainerStarted","Data":"bf69d6262bea8ba448d968dca5566c30888a5685c11b88d7aa85dd608f27c491"}
Mar 18 13:27:35.614537 master-0 kubenswrapper[27835]: I0318 13:27:35.614492 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"b18d60f984ec9ac700f29c2fb5a34a665f54058a8b6696a6c30d2a9dd6ae22d7"}
Mar 18 13:27:35.614628 master-0 kubenswrapper[27835]: I0318 13:27:35.614550 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"513fbfceaf087557393ac1ccf9f8fd7cf6026136b9473d4b1a3c228b5130fc06"}
Mar 18 13:27:35.614628 master-0 kubenswrapper[27835]: I0318 13:27:35.614575 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"3d46bdde-fa29-4faa-a7a8-fb52f9bdd939","Type":"ContainerStarted","Data":"d085b29245b7b299ea673f0aae6850e675aa733c486e8feac2d1116f0f4157da"}
Mar 18 13:27:35.654533 master-0 kubenswrapper[27835]: I0318 13:27:35.654268 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-d855b697f-6v4bh" podStartSLOduration=13.32186486 podStartE2EDuration="15.654243733s" podCreationTimestamp="2026-03-18 13:27:20 +0000 UTC" firstStartedPulling="2026-03-18 13:27:29.177148245 +0000 UTC m=+213.142359805" lastFinishedPulling="2026-03-18 13:27:31.509527118 +0000 UTC m=+215.474738678" observedRunningTime="2026-03-18 13:27:35.640344615 +0000 UTC m=+219.605556205" watchObservedRunningTime="2026-03-18 13:27:35.654243733 +0000 UTC m=+219.619455333"
Mar 18 13:27:35.690577 master-0 kubenswrapper[27835]: I0318 13:27:35.687147 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-f6zh2" podStartSLOduration=1.206613529 podStartE2EDuration="4.687113002s" podCreationTimestamp="2026-03-18 13:27:31 +0000 UTC" firstStartedPulling="2026-03-18 13:27:31.496028702 +0000 UTC m=+215.461240262" lastFinishedPulling="2026-03-18 13:27:34.976528175 +0000 UTC m=+218.941739735" observedRunningTime="2026-03-18 13:27:35.662264872 +0000 UTC m=+219.627476472" watchObservedRunningTime="2026-03-18 13:27:35.687113002 +0000 UTC m=+219.652324632"
Mar 18 13:27:35.710927 master-0 kubenswrapper[27835]: I0318 13:27:35.710835 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=11.097294768 podStartE2EDuration="15.710818109s" podCreationTimestamp="2026-03-18 13:27:20 +0000 UTC" firstStartedPulling="2026-03-18 13:27:29.103771548 +0000 UTC m=+213.068983108" lastFinishedPulling="2026-03-18 13:27:33.717294889 +0000 UTC m=+217.682506449" observedRunningTime="2026-03-18 13:27:35.706197237 +0000 UTC m=+219.671408827" watchObservedRunningTime="2026-03-18 13:27:35.710818109 +0000 UTC m=+219.676029669"
Mar 18 13:27:35.749776 master-0 kubenswrapper[27835]: I0318 13:27:35.749596 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.909786201 podStartE2EDuration="9.749554986s" podCreationTimestamp="2026-03-18 13:27:26 +0000 UTC" firstStartedPulling="2026-03-18 13:27:27.89834415 +0000 UTC m=+211.863555730" lastFinishedPulling="2026-03-18 13:27:33.738112955 +0000 UTC m=+217.703324515" observedRunningTime="2026-03-18 13:27:35.742611367 +0000 UTC m=+219.707822937" watchObservedRunningTime="2026-03-18 13:27:35.749554986 +0000 UTC m=+219.714766546"
Mar 18 13:27:37.437484 master-0 kubenswrapper[27835]: I0318 13:27:37.437364 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 13:27:37.564319 master-0 kubenswrapper[27835]: I0318 13:27:37.564251 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5df65d974f-mpf5j"]
Mar 18 13:27:37.635277 master-0 kubenswrapper[27835]: I0318 13:27:37.635209 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-684cf44489-lfkt8"]
Mar 18 13:27:37.636179 master-0 kubenswrapper[27835]: I0318 13:27:37.636153 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.660116 master-0 kubenswrapper[27835]: I0318 13:27:37.660068 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-684cf44489-lfkt8"]
Mar 18 13:27:37.691395 master-0 kubenswrapper[27835]: I0318 13:27:37.691248 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4lhq\" (UniqueName: \"kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691395 master-0 kubenswrapper[27835]: I0318 13:27:37.691328 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691395 master-0 kubenswrapper[27835]: I0318 13:27:37.691350 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691395 master-0 kubenswrapper[27835]: I0318 13:27:37.691377 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691781 master-0 kubenswrapper[27835]: I0318 13:27:37.691499 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691781 master-0 kubenswrapper[27835]: I0318 13:27:37.691518 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.691781 master-0 kubenswrapper[27835]: I0318 13:27:37.691580 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.792617 master-0 kubenswrapper[27835]: I0318 13:27:37.792537 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4lhq\" (UniqueName: \"kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.792908 master-0 kubenswrapper[27835]: I0318 13:27:37.792778 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.792908 master-0 kubenswrapper[27835]: I0318 13:27:37.792859 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.792908 master-0 kubenswrapper[27835]: I0318 13:27:37.792894 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.793004 master-0 kubenswrapper[27835]: I0318 13:27:37.792983 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.793038 master-0 kubenswrapper[27835]: I0318 13:27:37.793009 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.793181 master-0 kubenswrapper[27835]: I0318 13:27:37.793144 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.793922 master-0 kubenswrapper[27835]: I0318 13:27:37.793888 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.794131 master-0 kubenswrapper[27835]: I0318 13:27:37.794095 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.794269 master-0 kubenswrapper[27835]: I0318 13:27:37.794224 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.794326 master-0 kubenswrapper[27835]: I0318 13:27:37.794235 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.796738 master-0 kubenswrapper[27835]: I0318 13:27:37.796692 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.800553 master-0 kubenswrapper[27835]: I0318 13:27:37.799626 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.815133 master-0 kubenswrapper[27835]: I0318 13:27:37.815086 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4lhq\" (UniqueName: \"kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq\") pod \"console-684cf44489-lfkt8\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:37.956619 master-0 kubenswrapper[27835]: I0318 13:27:37.956478 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684cf44489-lfkt8"
Mar 18 13:27:38.252809 master-0 kubenswrapper[27835]: I0318 13:27:38.252747 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-684cf44489-lfkt8"]
Mar 18 13:27:38.639174 master-0 kubenswrapper[27835]: I0318 13:27:38.639012 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684cf44489-lfkt8" event={"ID":"81db56ef-4aac-48a8-aada-fb4c198f0b5c","Type":"ContainerStarted","Data":"9fa672993c9b7c5196a3fb1556df3cdd54dab0fccfb2f0fc7df466544667f818"}
Mar 18 13:27:38.639174 master-0 kubenswrapper[27835]: I0318 13:27:38.639063 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684cf44489-lfkt8" event={"ID":"81db56ef-4aac-48a8-aada-fb4c198f0b5c","Type":"ContainerStarted","Data":"8bb10be2b6365a5a5679cfef0e278c2462d4eca1c11ab331102458b318fa5320"}
Mar 18 13:27:38.661347 master-0 kubenswrapper[27835]: I0318 13:27:38.661239 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-684cf44489-lfkt8" podStartSLOduration=1.661218153 podStartE2EDuration="1.661218153s" podCreationTimestamp="2026-03-18 13:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:38.657350283 +0000 UTC m=+222.622561863" watchObservedRunningTime="2026-03-18 13:27:38.661218153 +0000 UTC m=+222.626429723"
Mar 18 13:27:39.636179 master-0 kubenswrapper[27835]: I0318 13:27:39.636094 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-686bcb5cf-88rcq"]
Mar 18 13:27:39.676059 master-0 kubenswrapper[27835]: I0318 13:27:39.675998 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"]
Mar 18 13:27:39.676902 master-0 kubenswrapper[27835]: I0318 13:27:39.676869 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.700150 master-0 kubenswrapper[27835]: I0318 13:27:39.699752 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"]
Mar 18 13:27:39.730330 master-0 kubenswrapper[27835]: I0318 13:27:39.730265 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.730573 master-0 kubenswrapper[27835]: I0318 13:27:39.730481 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbh4t\" (UniqueName: \"kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.730717 master-0 kubenswrapper[27835]: I0318 13:27:39.730660 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.730940 master-0 kubenswrapper[27835]: I0318 13:27:39.730907 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.731026 master-0 kubenswrapper[27835]: I0318 13:27:39.731001 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.731063 master-0 kubenswrapper[27835]: I0318 13:27:39.731036 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.731177 master-0 kubenswrapper[27835]: I0318 13:27:39.731138 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.832593 master-0 kubenswrapper[27835]: I0318 13:27:39.832533 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.832846 master-0 kubenswrapper[27835]: I0318 13:27:39.832626 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbh4t\" (UniqueName: \"kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.832846 master-0 kubenswrapper[27835]: I0318 13:27:39.832665 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.832846 master-0 kubenswrapper[27835]: I0318 13:27:39.832706 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.833128 master-0 kubenswrapper[27835]: I0318 13:27:39.833090 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.833246 master-0 kubenswrapper[27835]: I0318 13:27:39.833231 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.833387 master-0 kubenswrapper[27835]: I0318 13:27:39.833373 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.833796 master-0 kubenswrapper[27835]: I0318 13:27:39.833760 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.833983 master-0 kubenswrapper[27835]: I0318 13:27:39.833947 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.834716 master-0 kubenswrapper[27835]: I0318 13:27:39.834669 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.835029 master-0 kubenswrapper[27835]: I0318 13:27:39.834980 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.839562 master-0 kubenswrapper[27835]: I0318 13:27:39.839515 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.850079 master-0 kubenswrapper[27835]: I0318 13:27:39.850047 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbh4t\" (UniqueName: \"kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.853271 master-0 kubenswrapper[27835]: I0318 13:27:39.853220 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert\") pod \"console-7bb86d5d56-ffwhx\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:39.997696 master-0 kubenswrapper[27835]: I0318 13:27:39.997401 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:40.490963 master-0 kubenswrapper[27835]: I0318 13:27:40.487771 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"]
Mar 18 13:27:40.504616 master-0 kubenswrapper[27835]: W0318 13:27:40.504486 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb6b1c3_9c99_4a1c_ac37_f1ad9dfa73a4.slice/crio-28ff5b928d3aa871633aaa4dba6bb889908836cac6bb909062b1b831326332eb WatchSource:0}: Error finding container 28ff5b928d3aa871633aaa4dba6bb889908836cac6bb909062b1b831326332eb: Status 404 returned error can't find the container with id 28ff5b928d3aa871633aaa4dba6bb889908836cac6bb909062b1b831326332eb
Mar 18 13:27:40.658246 master-0 kubenswrapper[27835]: I0318 13:27:40.658191 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb86d5d56-ffwhx" event={"ID":"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4","Type":"ContainerStarted","Data":"28ff5b928d3aa871633aaa4dba6bb889908836cac6bb909062b1b831326332eb"}
Mar 18 13:27:40.851059 master-0 kubenswrapper[27835]: I0318 13:27:40.850960 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 13:27:40.851845 master-0 kubenswrapper[27835]: I0318 13:27:40.851815 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:27:40.852116 master-0 kubenswrapper[27835]: I0318 13:27:40.852086 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:27:40.852496 master-0 kubenswrapper[27835]: I0318 13:27:40.852468 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" containerID="cri-o://ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1" gracePeriod=15
Mar 18 13:27:40.852621 master-0 kubenswrapper[27835]: I0318 13:27:40.852587 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895" gracePeriod=15
Mar 18 13:27:40.852732 master-0 kubenswrapper[27835]: I0318 13:27:40.852708 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd" gracePeriod=15
Mar 18 13:27:40.852732 master-0 kubenswrapper[27835]: I0318 13:27:40.852694 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" containerID="cri-o://96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36" gracePeriod=15
Mar 18 13:27:40.852873 master-0 kubenswrapper[27835]: I0318 13:27:40.852490 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f" gracePeriod=15
Mar 18 13:27:40.853366 master-0 kubenswrapper[27835]: I0318 13:27:40.853331 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 13:27:40.853627 master-0 kubenswrapper[27835]: E0318 13:27:40.853597 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 13:27:40.853627 master-0 kubenswrapper[27835]: I0318 13:27:40.853621 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: E0318 13:27:40.853640 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: I0318 13:27:40.853646 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: E0318 13:27:40.853667 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: I0318 13:27:40.853673 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: E0318 13:27:40.853683 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: I0318 13:27:40.853689 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: E0318 13:27:40.853696 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: I0318 13:27:40.853702 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 13:27:40.853703 master-0 kubenswrapper[27835]: E0318 13:27:40.853716 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853722 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853873 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853884 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853895 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853903 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18
13:27:40.853965 master-0 kubenswrapper[27835]: I0318 13:27:40.853915 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 18 13:27:40.885957 master-0 kubenswrapper[27835]: I0318 13:27:40.882993 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:40.885957 master-0 kubenswrapper[27835]: I0318 13:27:40.884013 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf" Mar 18 13:27:40.954678 master-0 kubenswrapper[27835]: I0318 13:27:40.954621 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:40.954889 master-0 kubenswrapper[27835]: I0318 13:27:40.954691 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:40.954889 master-0 kubenswrapper[27835]: I0318 13:27:40.954727 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:40.954889 master-0 kubenswrapper[27835]: I0318 13:27:40.954838 27835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:40.955029 master-0 kubenswrapper[27835]: I0318 13:27:40.954986 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:40.955260 master-0 kubenswrapper[27835]: I0318 13:27:40.955094 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:40.955260 master-0 kubenswrapper[27835]: I0318 13:27:40.955236 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:40.955332 master-0 kubenswrapper[27835]: I0318 13:27:40.955309 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:40.958725 master-0 kubenswrapper[27835]: E0318 13:27:40.958692 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.056600 master-0 kubenswrapper[27835]: I0318 13:27:41.056519 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.056600 master-0 kubenswrapper[27835]: I0318 13:27:41.056597 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.056738 master-0 kubenswrapper[27835]: I0318 13:27:41.056616 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.056907 master-0 kubenswrapper[27835]: I0318 13:27:41.056711 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.056907 master-0 kubenswrapper[27835]: I0318 13:27:41.056746 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.056907 master-0 kubenswrapper[27835]: I0318 13:27:41.056798 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.056907 master-0 kubenswrapper[27835]: I0318 13:27:41.056833 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057011 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057041 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057075 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057154 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057123 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057191 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.057271 master-0 kubenswrapper[27835]: I0318 13:27:41.057159 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.057573 master-0 kubenswrapper[27835]: I0318 13:27:41.057403 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.057573 master-0 kubenswrapper[27835]: I0318 13:27:41.057445 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.260078 master-0 kubenswrapper[27835]: I0318 13:27:41.259996 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.283367 master-0 kubenswrapper[27835]: E0318 13:27:41.283207 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df28056efe9bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:27:41.282142653 +0000 UTC m=+225.247354223,LastTimestamp:2026-03-18 13:27:41.282142653 +0000 UTC m=+225.247354223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:27:41.668009 master-0 kubenswrapper[27835]: I0318 13:27:41.667947 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"79bf0f7a4318c4fff8d9359a7475c6cfeeb6f4f8213f77abdbf1e29b054b000e"} Mar 18 13:27:41.669032 master-0 kubenswrapper[27835]: I0318 13:27:41.668021 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"ace79a743850c904f9a95030516536cd023cad010ea87f7c2e072eb60428acbd"} Mar 18 13:27:41.669104 master-0 kubenswrapper[27835]: E0318 13:27:41.669056 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:27:41.669217 master-0 kubenswrapper[27835]: I0318 13:27:41.669137 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:41.669396 master-0 kubenswrapper[27835]: I0318 13:27:41.669355 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb86d5d56-ffwhx" event={"ID":"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4","Type":"ContainerStarted","Data":"ce1c406456499af0ac269fe2feecc4f83fe8248112e4b0f9d2db6dd1022203ce"} Mar 18 13:27:41.670280 master-0 kubenswrapper[27835]: I0318 13:27:41.670234 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:41.670980 master-0 kubenswrapper[27835]: I0318 13:27:41.670919 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:41.673012 master-0 kubenswrapper[27835]: I0318 13:27:41.672955 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 13:27:41.673800 master-0 kubenswrapper[27835]: I0318 13:27:41.673759 27835 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f" exitCode=0 Mar 18 13:27:41.673800 master-0 kubenswrapper[27835]: I0318 13:27:41.673792 27835 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd" exitCode=0 Mar 18 13:27:41.673917 master-0 kubenswrapper[27835]: I0318 13:27:41.673809 27835 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895" exitCode=0 Mar 18 13:27:41.673917 master-0 kubenswrapper[27835]: I0318 13:27:41.673821 27835 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36" exitCode=2 Mar 18 13:27:41.675258 master-0 kubenswrapper[27835]: I0318 13:27:41.675187 27835 generic.go:334] "Generic (PLEG): container finished" podID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" containerID="0f79e96911cfa1e56118a281776977c083f7a6b8827655084e1ffc23c13690f3" exitCode=0 Mar 18 13:27:41.675735 master-0 kubenswrapper[27835]: I0318 13:27:41.675467 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" 
event={"ID":"bcf7b950-0686-4e8e-87da-84c45d4ca1b4","Type":"ContainerDied","Data":"0f79e96911cfa1e56118a281776977c083f7a6b8827655084e1ffc23c13690f3"} Mar 18 13:27:41.676643 master-0 kubenswrapper[27835]: I0318 13:27:41.676577 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:41.677286 master-0 kubenswrapper[27835]: I0318 13:27:41.677240 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:41.677821 master-0 kubenswrapper[27835]: I0318 13:27:41.677751 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.102951 master-0 kubenswrapper[27835]: I0318 13:27:43.102909 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:43.103695 master-0 kubenswrapper[27835]: I0318 13:27:43.103656 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.104192 master-0 kubenswrapper[27835]: I0318 13:27:43.104147 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.202557 master-0 kubenswrapper[27835]: I0318 13:27:43.202465 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock\") pod \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " Mar 18 13:27:43.202557 master-0 kubenswrapper[27835]: I0318 13:27:43.202535 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir\") pod \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " Mar 18 13:27:43.202816 master-0 kubenswrapper[27835]: I0318 13:27:43.202572 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock" (OuterVolumeSpecName: "var-lock") pod "bcf7b950-0686-4e8e-87da-84c45d4ca1b4" (UID: "bcf7b950-0686-4e8e-87da-84c45d4ca1b4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:43.202816 master-0 kubenswrapper[27835]: I0318 13:27:43.202611 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access\") pod \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\" (UID: \"bcf7b950-0686-4e8e-87da-84c45d4ca1b4\") " Mar 18 13:27:43.202816 master-0 kubenswrapper[27835]: I0318 13:27:43.202620 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcf7b950-0686-4e8e-87da-84c45d4ca1b4" (UID: "bcf7b950-0686-4e8e-87da-84c45d4ca1b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:43.203011 master-0 kubenswrapper[27835]: I0318 13:27:43.202985 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.203061 master-0 kubenswrapper[27835]: I0318 13:27:43.203008 27835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.207290 master-0 kubenswrapper[27835]: I0318 13:27:43.207229 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcf7b950-0686-4e8e-87da-84c45d4ca1b4" (UID: "bcf7b950-0686-4e8e-87da-84c45d4ca1b4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:27:43.251171 master-0 kubenswrapper[27835]: I0318 13:27:43.251056 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 13:27:43.252148 master-0 kubenswrapper[27835]: I0318 13:27:43.252085 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:43.253228 master-0 kubenswrapper[27835]: I0318 13:27:43.253179 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.253935 master-0 kubenswrapper[27835]: I0318 13:27:43.253863 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.254733 master-0 kubenswrapper[27835]: I0318 13:27:43.254690 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.308623 master-0 kubenswrapper[27835]: I0318 13:27:43.308393 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 13:27:43.308623 master-0 kubenswrapper[27835]: I0318 13:27:43.308476 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:43.308623 master-0 kubenswrapper[27835]: I0318 13:27:43.308505 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 13:27:43.308623 master-0 kubenswrapper[27835]: I0318 13:27:43.308575 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 13:27:43.309206 master-0 kubenswrapper[27835]: I0318 13:27:43.308658 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:43.309206 master-0 kubenswrapper[27835]: I0318 13:27:43.308761 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:27:43.310193 master-0 kubenswrapper[27835]: I0318 13:27:43.310100 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.310193 master-0 kubenswrapper[27835]: I0318 13:27:43.310173 27835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.310472 master-0 kubenswrapper[27835]: I0318 13:27:43.310205 27835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.310472 master-0 kubenswrapper[27835]: I0318 13:27:43.310232 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcf7b950-0686-4e8e-87da-84c45d4ca1b4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:27:43.698538 master-0 kubenswrapper[27835]: I0318 13:27:43.698461 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 13:27:43.699649 master-0 kubenswrapper[27835]: I0318 13:27:43.699584 27835 generic.go:334] "Generic (PLEG): container finished" 
podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1" exitCode=0 Mar 18 13:27:43.699760 master-0 kubenswrapper[27835]: I0318 13:27:43.699710 27835 scope.go:117] "RemoveContainer" containerID="ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f" Mar 18 13:27:43.699809 master-0 kubenswrapper[27835]: I0318 13:27:43.699742 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:27:43.702204 master-0 kubenswrapper[27835]: I0318 13:27:43.702153 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"bcf7b950-0686-4e8e-87da-84c45d4ca1b4","Type":"ContainerDied","Data":"08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0"} Mar 18 13:27:43.702277 master-0 kubenswrapper[27835]: I0318 13:27:43.702212 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 13:27:43.702323 master-0 kubenswrapper[27835]: I0318 13:27:43.702227 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08e4dd44618b9242aeef8503f3e807d55dc2c027717e45148cc2357b97ca2cd0" Mar 18 13:27:43.725709 master-0 kubenswrapper[27835]: I0318 13:27:43.725475 27835 scope.go:117] "RemoveContainer" containerID="cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd" Mar 18 13:27:43.725814 master-0 kubenswrapper[27835]: I0318 13:27:43.722738 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.729184 master-0 kubenswrapper[27835]: I0318 13:27:43.728840 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.729897 master-0 kubenswrapper[27835]: I0318 13:27:43.729831 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.733264 master-0 kubenswrapper[27835]: I0318 13:27:43.733185 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.733754 master-0 kubenswrapper[27835]: I0318 13:27:43.733700 27835 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.734230 master-0 kubenswrapper[27835]: I0318 13:27:43.734181 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:43.749906 master-0 kubenswrapper[27835]: I0318 13:27:43.749855 27835 scope.go:117] "RemoveContainer" containerID="49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895" Mar 18 13:27:43.765731 master-0 kubenswrapper[27835]: I0318 13:27:43.765680 27835 scope.go:117] "RemoveContainer" containerID="96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36" Mar 18 13:27:43.782576 master-0 kubenswrapper[27835]: I0318 13:27:43.782522 27835 scope.go:117] "RemoveContainer" containerID="ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1" Mar 18 13:27:43.802560 master-0 kubenswrapper[27835]: I0318 13:27:43.802490 27835 scope.go:117] "RemoveContainer" containerID="d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b" Mar 18 13:27:43.823578 master-0 kubenswrapper[27835]: I0318 13:27:43.823498 27835 scope.go:117] "RemoveContainer" containerID="ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f" Mar 18 13:27:43.824901 master-0 
kubenswrapper[27835]: E0318 13:27:43.824817 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f\": container with ID starting with ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f not found: ID does not exist" containerID="ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f" Mar 18 13:27:43.825032 master-0 kubenswrapper[27835]: I0318 13:27:43.824919 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f"} err="failed to get container status \"ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f\": rpc error: code = NotFound desc = could not find container \"ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f\": container with ID starting with ca8ccc964fb04ea68139ff0c86d7918814ea0da0437de0153ad05bce2b1f456f not found: ID does not exist" Mar 18 13:27:43.825032 master-0 kubenswrapper[27835]: I0318 13:27:43.824977 27835 scope.go:117] "RemoveContainer" containerID="cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd" Mar 18 13:27:43.825650 master-0 kubenswrapper[27835]: E0318 13:27:43.825596 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd\": container with ID starting with cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd not found: ID does not exist" containerID="cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd" Mar 18 13:27:43.825749 master-0 kubenswrapper[27835]: I0318 13:27:43.825659 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd"} err="failed to get container 
status \"cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd\": rpc error: code = NotFound desc = could not find container \"cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd\": container with ID starting with cf0173ce1093b233054a91f4b682e330dd8986ad5e5b2215370c46af8d2688dd not found: ID does not exist" Mar 18 13:27:43.825749 master-0 kubenswrapper[27835]: I0318 13:27:43.825693 27835 scope.go:117] "RemoveContainer" containerID="49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895" Mar 18 13:27:43.826186 master-0 kubenswrapper[27835]: E0318 13:27:43.826113 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895\": container with ID starting with 49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895 not found: ID does not exist" containerID="49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895" Mar 18 13:27:43.826286 master-0 kubenswrapper[27835]: I0318 13:27:43.826175 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895"} err="failed to get container status \"49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895\": rpc error: code = NotFound desc = could not find container \"49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895\": container with ID starting with 49d6c263d942f0bab661dcdfe15f6600d31d6a3c1fbd3334b615fb9a5aa7c895 not found: ID does not exist" Mar 18 13:27:43.826286 master-0 kubenswrapper[27835]: I0318 13:27:43.826209 27835 scope.go:117] "RemoveContainer" containerID="96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36" Mar 18 13:27:43.826682 master-0 kubenswrapper[27835]: E0318 13:27:43.826615 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36\": container with ID starting with 96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36 not found: ID does not exist" containerID="96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36" Mar 18 13:27:43.826768 master-0 kubenswrapper[27835]: I0318 13:27:43.826684 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36"} err="failed to get container status \"96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36\": rpc error: code = NotFound desc = could not find container \"96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36\": container with ID starting with 96d6d5f06ba1610ef367eb7cb9848074e8e050aa3718b047f91cb7f88184ec36 not found: ID does not exist" Mar 18 13:27:43.826768 master-0 kubenswrapper[27835]: I0318 13:27:43.826724 27835 scope.go:117] "RemoveContainer" containerID="ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1" Mar 18 13:27:43.827103 master-0 kubenswrapper[27835]: E0318 13:27:43.827054 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1\": container with ID starting with ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1 not found: ID does not exist" containerID="ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1" Mar 18 13:27:43.827174 master-0 kubenswrapper[27835]: I0318 13:27:43.827091 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1"} err="failed to get container status \"ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1\": rpc error: code = NotFound desc = could not find container 
\"ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1\": container with ID starting with ff4cc0ab0545a1ff78db4c44d4a6b4fbd4fafe95e0619b97f79063c0eeba0df1 not found: ID does not exist" Mar 18 13:27:43.827174 master-0 kubenswrapper[27835]: I0318 13:27:43.827133 27835 scope.go:117] "RemoveContainer" containerID="d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b" Mar 18 13:27:43.827506 master-0 kubenswrapper[27835]: E0318 13:27:43.827465 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b\": container with ID starting with d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b not found: ID does not exist" containerID="d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b" Mar 18 13:27:43.827592 master-0 kubenswrapper[27835]: I0318 13:27:43.827495 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b"} err="failed to get container status \"d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b\": rpc error: code = NotFound desc = could not find container \"d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b\": container with ID starting with d2f34ff6e866c27595deda451855f1a4008a299a57f12d119251d3d49448157b not found: ID does not exist" Mar 18 13:27:44.300371 master-0 kubenswrapper[27835]: I0318 13:27:44.300253 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5ce05b3d592e63f1f92202d52b9635" path="/var/lib/kubelet/pods/7d5ce05b3d592e63f1f92202d52b9635/volumes" Mar 18 13:27:46.290830 master-0 kubenswrapper[27835]: I0318 13:27:46.290730 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:46.291971 master-0 kubenswrapper[27835]: I0318 13:27:46.291762 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:46.983339 master-0 kubenswrapper[27835]: E0318 13:27:46.983186 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df28056efe9bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:27:41.282142653 +0000 UTC m=+225.247354223,LastTimestamp:2026-03-18 13:27:41.282142653 +0000 UTC m=+225.247354223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:27:47.896656 master-0 kubenswrapper[27835]: E0318 13:27:47.896555 27835 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:47.897617 master-0 kubenswrapper[27835]: E0318 13:27:47.897551 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:47.898394 master-0 kubenswrapper[27835]: E0318 13:27:47.898339 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:47.899144 master-0 kubenswrapper[27835]: E0318 13:27:47.899065 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:47.899930 master-0 kubenswrapper[27835]: E0318 13:27:47.899860 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:47.899930 master-0 kubenswrapper[27835]: I0318 13:27:47.899922 27835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 13:27:47.900874 master-0 kubenswrapper[27835]: E0318 13:27:47.900808 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" interval="200ms" Mar 18 13:27:47.957268 master-0 kubenswrapper[27835]: I0318 13:27:47.957141 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-684cf44489-lfkt8" Mar 18 13:27:47.957268 master-0 kubenswrapper[27835]: I0318 13:27:47.957267 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-684cf44489-lfkt8" Mar 18 13:27:47.959702 master-0 kubenswrapper[27835]: I0318 13:27:47.959660 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:27:47.959806 master-0 kubenswrapper[27835]: I0318 13:27:47.959730 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:27:48.104092 master-0 kubenswrapper[27835]: E0318 13:27:48.104005 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 13:27:48.141305 master-0 kubenswrapper[27835]: E0318 13:27:48.141091 27835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:27:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:27:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:27:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T13:27:48Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2\\\"],\\\"sizeBytes\\\":2895807090},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:491bd3a9c1f09106983d7c3b85f1c97c80dd582f8d1a10e6f6794bf430d7ac19\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8b28c7575f0f57c4dfc6bf61038ad06affeca0d25d7741b97abc25aa54b74e42\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746888156},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:31180c161c4416e7c7d06d63571b814732d4ff11a14e2bfdcac0681ed14204ac\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:e4bd35b83c0fba6d225bfa8f356a8e5df013653884a4233d5a7c4e3b5d503bae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223730902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76\\\"],\\\"sizeBytes\\\":880382887},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c\\\"],\\\"sizeBytes\\\":862205633},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d\\\"],\\\"sizeBytes\\\":633877280},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5\\\"],\\\"sizeBytes\\\":605698193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5\\\"],\\\"sizeBytes\\\":557428271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c573
38ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8\\\"],\\\"sizeBytes\\\":512235769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27\\\"],\\\"sizeBytes\\\":504662731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\
\"],\\\"sizeBytes\\\":487159945}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:48.142524 master-0 kubenswrapper[27835]: E0318 13:27:48.142464 27835 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:48.143272 master-0 kubenswrapper[27835]: E0318 13:27:48.143221 27835 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:48.143912 master-0 kubenswrapper[27835]: E0318 13:27:48.143887 27835 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:48.144574 master-0 kubenswrapper[27835]: E0318 13:27:48.144547 27835 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:27:48.144574 master-0 kubenswrapper[27835]: E0318 13:27:48.144573 27835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 13:27:48.506499 master-0 kubenswrapper[27835]: E0318 13:27:48.506374 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial 
tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 13:27:49.308277 master-0 kubenswrapper[27835]: E0318 13:27:49.308198 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 13:27:49.997724 master-0 kubenswrapper[27835]: I0318 13:27:49.997620 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:49.998063 master-0 kubenswrapper[27835]: I0318 13:27:49.997845 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bb86d5d56-ffwhx"
Mar 18 13:27:50.000623 master-0 kubenswrapper[27835]: I0318 13:27:50.000537 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body=
Mar 18 13:27:50.000623 master-0 kubenswrapper[27835]: I0318 13:27:50.000604 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused"
Mar 18 13:27:50.910403 master-0 kubenswrapper[27835]: E0318 13:27:50.909218 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 13:27:52.280584 master-0 kubenswrapper[27835]: I0318 13:27:52.280500 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:52.282317 master-0 kubenswrapper[27835]: I0318 13:27:52.282248 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:27:52.282966 master-0 kubenswrapper[27835]: I0318 13:27:52.282918 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:27:52.308612 master-0 kubenswrapper[27835]: I0318 13:27:52.308544 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:52.308612 master-0 kubenswrapper[27835]: I0318 13:27:52.308599 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:52.309575 master-0 kubenswrapper[27835]: E0318 13:27:52.309504 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:52.310313 master-0 kubenswrapper[27835]: I0318 13:27:52.310251 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:52.347665 master-0 kubenswrapper[27835]: W0318 13:27:52.347600 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f502b117c7c8479f7f20848a50fec0.slice/crio-29c5839b61cd5e6b565cc22148ea9d18a78a57e4a4495db0b7cc8c2695a7e7c9 WatchSource:0}: Error finding container 29c5839b61cd5e6b565cc22148ea9d18a78a57e4a4495db0b7cc8c2695a7e7c9: Status 404 returned error can't find the container with id 29c5839b61cd5e6b565cc22148ea9d18a78a57e4a4495db0b7cc8c2695a7e7c9
Mar 18 13:27:52.790901 master-0 kubenswrapper[27835]: I0318 13:27:52.790795 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c" exitCode=0
Mar 18 13:27:52.790901 master-0 kubenswrapper[27835]: I0318 13:27:52.790887 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerDied","Data":"e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c"}
Mar 18 13:27:52.791204 master-0 kubenswrapper[27835]: I0318 13:27:52.790951 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"29c5839b61cd5e6b565cc22148ea9d18a78a57e4a4495db0b7cc8c2695a7e7c9"}
Mar 18 13:27:52.791520 master-0 kubenswrapper[27835]: I0318 13:27:52.791474 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:52.791579 master-0 kubenswrapper[27835]: I0318 13:27:52.791520 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:52.792400 master-0 kubenswrapper[27835]: I0318 13:27:52.792297 27835 status_manager.go:851] "Failed to get status for pod" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:27:52.792550 master-0 kubenswrapper[27835]: E0318 13:27:52.792485 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:52.793380 master-0 kubenswrapper[27835]: I0318 13:27:52.793306 27835 status_manager.go:851] "Failed to get status for pod" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" pod="openshift-console/console-7bb86d5d56-ffwhx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-7bb86d5d56-ffwhx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:27:53.810172 master-0 kubenswrapper[27835]: I0318 13:27:53.810127 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/2.log"
Mar 18 13:27:53.816921 master-0 kubenswrapper[27835]: I0318 13:27:53.816820 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/1.log"
Mar 18 13:27:53.819130 master-0 kubenswrapper[27835]: I0318 13:27:53.819089 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" exitCode=1
Mar 18 13:27:53.819240 master-0 kubenswrapper[27835]: I0318 13:27:53.819150 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerDied","Data":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"}
Mar 18 13:27:53.819240 master-0 kubenswrapper[27835]: I0318 13:27:53.819184 27835 scope.go:117] "RemoveContainer" containerID="9b5f2a1af05afc1d7e7cb36c0823f94aa2ee39888af2d9a6e00b457182627afd"
Mar 18 13:27:53.819891 master-0 kubenswrapper[27835]: I0318 13:27:53.819719 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"
Mar 18 13:27:53.820084 master-0 kubenswrapper[27835]: E0318 13:27:53.820053 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(c129e07da670ff3af256d72652e4b1da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da"
Mar 18 13:27:53.834065 master-0 kubenswrapper[27835]: I0318 13:27:53.833989 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e"}
Mar 18 13:27:53.834065 master-0 kubenswrapper[27835]: I0318 13:27:53.834059 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec"}
Mar 18 13:27:53.834065 master-0 kubenswrapper[27835]: I0318 13:27:53.834074 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480"}
Mar 18 13:27:54.844285 master-0 kubenswrapper[27835]: I0318 13:27:54.844205 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/2.log"
Mar 18 13:27:54.849491 master-0 kubenswrapper[27835]: I0318 13:27:54.849424 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2"}
Mar 18 13:27:54.849491 master-0 kubenswrapper[27835]: I0318 13:27:54.849466 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b"}
Mar 18 13:27:54.849709 master-0 kubenswrapper[27835]: I0318 13:27:54.849670 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:54.849936 master-0 kubenswrapper[27835]: I0318 13:27:54.849876 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:54.849936 master-0 kubenswrapper[27835]: I0318 13:27:54.849925 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:56.405003 master-0 kubenswrapper[27835]: I0318 13:27:56.404921 27835 scope.go:117] "RemoveContainer" containerID="057d6561c0f4da44fc1dbbb3cf541c1859a6f838b5eed3e585b47f89bb483358"
Mar 18 13:27:57.310978 master-0 kubenswrapper[27835]: I0318 13:27:57.310896 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:57.311502 master-0 kubenswrapper[27835]: I0318 13:27:57.311392 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:57.320643 master-0 kubenswrapper[27835]: I0318 13:27:57.320588 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:57.958229 master-0 kubenswrapper[27835]: I0318 13:27:57.958118 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body=
Mar 18 13:27:57.958229 master-0 kubenswrapper[27835]: I0318 13:27:57.958200 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused"
Mar 18 13:27:58.084050 master-0 kubenswrapper[27835]: I0318 13:27:58.083966 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:27:58.084835 master-0 kubenswrapper[27835]: I0318 13:27:58.084767 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:27:58.084969 master-0 kubenswrapper[27835]: I0318 13:27:58.084839 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 13:27:58.085050 master-0 kubenswrapper[27835]: I0318 13:27:58.084953 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"
Mar 18 13:27:58.085587 master-0 kubenswrapper[27835]: E0318 13:27:58.085523 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(c129e07da670ff3af256d72652e4b1da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da"
Mar 18 13:27:58.882685 master-0 kubenswrapper[27835]: I0318 13:27:58.882632 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"
Mar 18 13:27:58.883709 master-0 kubenswrapper[27835]: E0318 13:27:58.883673 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(c129e07da670ff3af256d72652e4b1da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da"
Mar 18 13:27:59.867145 master-0 kubenswrapper[27835]: I0318 13:27:59.867088 27835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:59.888255 master-0 kubenswrapper[27835]: I0318 13:27:59.888197 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:59.888255 master-0 kubenswrapper[27835]: I0318 13:27:59.888235 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:27:59.894872 master-0 kubenswrapper[27835]: I0318 13:27:59.894827 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:27:59.897011 master-0 kubenswrapper[27835]: I0318 13:27:59.896970 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:27:59.999053 master-0 kubenswrapper[27835]: I0318 13:27:59.998977 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body=
Mar 18 13:27:59.999379 master-0 kubenswrapper[27835]: I0318 13:27:59.999352 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused"
Mar 18 13:28:00.900718 master-0 kubenswrapper[27835]: I0318 13:28:00.898667 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:28:00.900718 master-0 kubenswrapper[27835]: I0318 13:28:00.898737 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0033a693-a6fc-449f-9a62-699cd3433715"
Mar 18 13:28:00.910734 master-0 kubenswrapper[27835]: I0318 13:28:00.910669 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:28:00.918505 master-0 kubenswrapper[27835]: I0318 13:28:00.918443 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-5668cbc594-2kzhf"
Mar 18 13:28:02.616711 master-0 kubenswrapper[27835]: I0318 13:28:02.616601 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5df65d974f-mpf5j" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console" containerID="cri-o://bcea1c31c8eafd3e0cc4eaee9afbe2a8bdda8a5acc6c08c03a14414c75f82365" gracePeriod=15
Mar 18 13:28:02.916281 master-0 kubenswrapper[27835]: I0318 13:28:02.916206 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df65d974f-mpf5j_a42bf050-6c38-4023-a8b4-dc795f3aadc7/console/0.log"
Mar 18 13:28:02.916566 master-0 kubenswrapper[27835]: I0318 13:28:02.916286 27835 generic.go:334] "Generic (PLEG): container finished" podID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerID="bcea1c31c8eafd3e0cc4eaee9afbe2a8bdda8a5acc6c08c03a14414c75f82365" exitCode=2
Mar 18 13:28:02.916566 master-0 kubenswrapper[27835]: I0318 13:28:02.916330 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df65d974f-mpf5j" event={"ID":"a42bf050-6c38-4023-a8b4-dc795f3aadc7","Type":"ContainerDied","Data":"bcea1c31c8eafd3e0cc4eaee9afbe2a8bdda8a5acc6c08c03a14414c75f82365"}
Mar 18 13:28:03.150706 master-0 kubenswrapper[27835]: I0318 13:28:03.150633 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df65d974f-mpf5j_a42bf050-6c38-4023-a8b4-dc795f3aadc7/console/0.log"
Mar 18 13:28:03.150706 master-0 kubenswrapper[27835]: I0318 13:28:03.150709 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df65d974f-mpf5j"
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269686 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269801 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269847 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6dl\" (UniqueName: \"kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269889 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269917 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.269960 master-0 kubenswrapper[27835]: I0318 13:28:03.269986 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config\") pod \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\" (UID: \"a42bf050-6c38-4023-a8b4-dc795f3aadc7\") "
Mar 18 13:28:03.271015 master-0 kubenswrapper[27835]: I0318 13:28:03.270964 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:03.271261 master-0 kubenswrapper[27835]: I0318 13:28:03.271167 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config" (OuterVolumeSpecName: "console-config") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:03.271858 master-0 kubenswrapper[27835]: I0318 13:28:03.271806 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca" (OuterVolumeSpecName: "service-ca") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:03.272989 master-0 kubenswrapper[27835]: I0318 13:28:03.272937 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:03.273640 master-0 kubenswrapper[27835]: I0318 13:28:03.273584 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl" (OuterVolumeSpecName: "kube-api-access-qq6dl") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "kube-api-access-qq6dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:28:03.274197 master-0 kubenswrapper[27835]: I0318 13:28:03.274141 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a42bf050-6c38-4023-a8b4-dc795f3aadc7" (UID: "a42bf050-6c38-4023-a8b4-dc795f3aadc7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:03.371925 master-0 kubenswrapper[27835]: I0318 13:28:03.371852 27835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.371925 master-0 kubenswrapper[27835]: I0318 13:28:03.371912 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.371925 master-0 kubenswrapper[27835]: I0318 13:28:03.371929 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6dl\" (UniqueName: \"kubernetes.io/projected/a42bf050-6c38-4023-a8b4-dc795f3aadc7-kube-api-access-qq6dl\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.372442 master-0 kubenswrapper[27835]: I0318 13:28:03.371946 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.372442 master-0 kubenswrapper[27835]: I0318 13:28:03.371961 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.372442 master-0 kubenswrapper[27835]: I0318 13:28:03.371973 27835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a42bf050-6c38-4023-a8b4-dc795f3aadc7-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:03.927366 master-0 kubenswrapper[27835]: I0318 13:28:03.927314 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df65d974f-mpf5j_a42bf050-6c38-4023-a8b4-dc795f3aadc7/console/0.log"
Mar 18 13:28:03.927877 master-0 kubenswrapper[27835]: I0318 13:28:03.927373 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df65d974f-mpf5j" event={"ID":"a42bf050-6c38-4023-a8b4-dc795f3aadc7","Type":"ContainerDied","Data":"e5f3340ca1c039a5a630a44295ded1a43916f16a647a68807e432f618e8cb3db"}
Mar 18 13:28:03.927877 master-0 kubenswrapper[27835]: I0318 13:28:03.927410 27835 scope.go:117] "RemoveContainer" containerID="bcea1c31c8eafd3e0cc4eaee9afbe2a8bdda8a5acc6c08c03a14414c75f82365"
Mar 18 13:28:03.927877 master-0 kubenswrapper[27835]: I0318 13:28:03.927519 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df65d974f-mpf5j"
Mar 18 13:28:04.676866 master-0 kubenswrapper[27835]: I0318 13:28:04.676725 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-686bcb5cf-88rcq" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console" containerID="cri-o://f34e7adb2cb006bc8a93977875f942662294b291d04af30371e70d1940adf03d" gracePeriod=15
Mar 18 13:28:04.936760 master-0 kubenswrapper[27835]: I0318 13:28:04.936627 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-686bcb5cf-88rcq_18d00d36-387c-4c03-affa-9abc8e2d4fe0/console/0.log"
Mar 18 13:28:04.936760 master-0 kubenswrapper[27835]: I0318 13:28:04.936723 27835 generic.go:334] "Generic (PLEG): container finished" podID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerID="f34e7adb2cb006bc8a93977875f942662294b291d04af30371e70d1940adf03d" exitCode=2
Mar 18 13:28:04.937383 master-0 kubenswrapper[27835]: I0318 13:28:04.936774 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-686bcb5cf-88rcq" event={"ID":"18d00d36-387c-4c03-affa-9abc8e2d4fe0","Type":"ContainerDied","Data":"f34e7adb2cb006bc8a93977875f942662294b291d04af30371e70d1940adf03d"}
Mar 18 13:28:05.193342 master-0 kubenswrapper[27835]: I0318 13:28:05.193227 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-686bcb5cf-88rcq_18d00d36-387c-4c03-affa-9abc8e2d4fe0/console/0.log"
Mar 18 13:28:05.193342 master-0 kubenswrapper[27835]: I0318 13:28:05.193300 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-686bcb5cf-88rcq"
Mar 18 13:28:05.305824 master-0 kubenswrapper[27835]: I0318 13:28:05.305716 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306128 master-0 kubenswrapper[27835]: I0318 13:28:05.305835 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306128 master-0 kubenswrapper[27835]: I0318 13:28:05.305885 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306128 master-0 kubenswrapper[27835]: I0318 13:28:05.305995 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306128 master-0 kubenswrapper[27835]: I0318 13:28:05.306043 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nzff\" (UniqueName: \"kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306128 master-0 kubenswrapper[27835]: I0318 13:28:05.306106 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.306675 master-0 kubenswrapper[27835]: I0318 13:28:05.306172 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca\") pod \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\" (UID: \"18d00d36-387c-4c03-affa-9abc8e2d4fe0\") "
Mar 18 13:28:05.308020 master-0 kubenswrapper[27835]: I0318 13:28:05.307940 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:05.308194 master-0 kubenswrapper[27835]: I0318 13:28:05.307995 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca" (OuterVolumeSpecName: "service-ca") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:05.308194 master-0 kubenswrapper[27835]: I0318 13:28:05.308133 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config" (OuterVolumeSpecName: "console-config") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:05.308360 master-0 kubenswrapper[27835]: I0318 13:28:05.308224 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.308360 master-0 kubenswrapper[27835]: I0318 13:28:05.308262 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.309349 master-0 kubenswrapper[27835]: I0318 13:28:05.309278 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:28:05.312140 master-0 kubenswrapper[27835]: I0318 13:28:05.312080 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:05.313255 master-0 kubenswrapper[27835]: I0318 13:28:05.313173 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:28:05.313735 master-0 kubenswrapper[27835]: I0318 13:28:05.313648 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff" (OuterVolumeSpecName: "kube-api-access-6nzff") pod "18d00d36-387c-4c03-affa-9abc8e2d4fe0" (UID: "18d00d36-387c-4c03-affa-9abc8e2d4fe0"). InnerVolumeSpecName "kube-api-access-6nzff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:28:05.409598 master-0 kubenswrapper[27835]: I0318 13:28:05.409483 27835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.409598 master-0 kubenswrapper[27835]: I0318 13:28:05.409547 27835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.409598 master-0 kubenswrapper[27835]: I0318 13:28:05.409571 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.409598 master-0 kubenswrapper[27835]: I0318 13:28:05.409588 27835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/18d00d36-387c-4c03-affa-9abc8e2d4fe0-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.409598 master-0 kubenswrapper[27835]: I0318 13:28:05.409607 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nzff\" (UniqueName: \"kubernetes.io/projected/18d00d36-387c-4c03-affa-9abc8e2d4fe0-kube-api-access-6nzff\") on node \"master-0\" DevicePath \"\""
Mar 18 13:28:05.948109 master-0 kubenswrapper[27835]: I0318 13:28:05.948008 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-686bcb5cf-88rcq_18d00d36-387c-4c03-affa-9abc8e2d4fe0/console/0.log"
Mar 18 13:28:05.948802 master-0 kubenswrapper[27835]: I0318 13:28:05.948120 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-686bcb5cf-88rcq" event={"ID":"18d00d36-387c-4c03-affa-9abc8e2d4fe0","Type":"ContainerDied","Data":"c3bc8b931abcfbab71285b0d05c76feae75b12d9b8c762c7ccaf412d14e7044b"}
Mar 18 13:28:05.948802 master-0 kubenswrapper[27835]: I0318 13:28:05.948171 27835 scope.go:117] "RemoveContainer" containerID="f34e7adb2cb006bc8a93977875f942662294b291d04af30371e70d1940adf03d"
Mar 18 13:28:05.948802 master-0 kubenswrapper[27835]: I0318 13:28:05.948190 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-686bcb5cf-88rcq"
Mar 18 13:28:06.303907 master-0 kubenswrapper[27835]: I0318 13:28:06.303786 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:28:07.957936 master-0 kubenswrapper[27835]: I0318 13:28:07.957804 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body=
Mar 18 13:28:07.959545 master-0 kubenswrapper[27835]: I0318 13:28:07.957926 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused"
Mar 18 13:28:09.998687 master-0 kubenswrapper[27835]: I0318 13:28:09.998568 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body=
Mar 18 13:28:09.999981 master-0 kubenswrapper[27835]: I0318 13:28:09.998675
27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:28:10.282510 master-0 kubenswrapper[27835]: I0318 13:28:10.282359 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:28:10.282901 master-0 kubenswrapper[27835]: I0318 13:28:10.282862 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 13:28:10.282901 master-0 kubenswrapper[27835]: E0318 13:28:10.282896 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(c129e07da670ff3af256d72652e4b1da)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" Mar 18 13:28:11.487817 master-0 kubenswrapper[27835]: I0318 13:28:11.487741 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 13:28:12.707290 master-0 kubenswrapper[27835]: I0318 13:28:12.707237 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 13:28:12.955807 master-0 kubenswrapper[27835]: I0318 13:28:12.955733 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 13:28:13.305150 master-0 kubenswrapper[27835]: I0318 13:28:13.305056 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 13:28:13.404957 master-0 kubenswrapper[27835]: I0318 13:28:13.404905 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 13:28:13.639943 master-0 kubenswrapper[27835]: I0318 13:28:13.639838 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 13:28:14.118978 master-0 kubenswrapper[27835]: I0318 13:28:14.118901 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 13:28:14.817828 master-0 kubenswrapper[27835]: I0318 13:28:14.817723 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c8bj17hs40gij" Mar 18 13:28:15.339607 master-0 kubenswrapper[27835]: I0318 13:28:15.339553 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 13:28:15.836435 master-0 kubenswrapper[27835]: I0318 13:28:15.836356 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 13:28:16.016706 master-0 kubenswrapper[27835]: I0318 13:28:16.014039 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 13:28:16.191226 master-0 kubenswrapper[27835]: I0318 13:28:16.191158 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-5rua6jvkkc769" Mar 18 13:28:16.514705 master-0 kubenswrapper[27835]: I0318 13:28:16.514561 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 13:28:16.602275 master-0 kubenswrapper[27835]: I0318 13:28:16.602223 27835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 13:28:16.746073 master-0 kubenswrapper[27835]: I0318 13:28:16.745964 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 13:28:17.016437 master-0 kubenswrapper[27835]: I0318 13:28:17.016365 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 13:28:17.154568 master-0 kubenswrapper[27835]: I0318 13:28:17.154487 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 13:28:17.213561 master-0 kubenswrapper[27835]: I0318 13:28:17.213486 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 13:28:17.745950 master-0 kubenswrapper[27835]: I0318 13:28:17.745873 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 13:28:17.803596 master-0 kubenswrapper[27835]: I0318 13:28:17.803522 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 13:28:17.958207 master-0 kubenswrapper[27835]: I0318 13:28:17.958112 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:28:17.958616 master-0 kubenswrapper[27835]: I0318 13:28:17.958214 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 
13:28:18.310484 master-0 kubenswrapper[27835]: I0318 13:28:18.306540 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 13:28:18.901345 master-0 kubenswrapper[27835]: I0318 13:28:18.901285 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 13:28:19.167024 master-0 kubenswrapper[27835]: I0318 13:28:19.166977 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 13:28:19.333719 master-0 kubenswrapper[27835]: I0318 13:28:19.333630 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 13:28:19.526226 master-0 kubenswrapper[27835]: I0318 13:28:19.524996 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 13:28:19.628270 master-0 kubenswrapper[27835]: I0318 13:28:19.628204 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6v458hjp7b0gm" Mar 18 13:28:19.642732 master-0 kubenswrapper[27835]: I0318 13:28:19.642674 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 13:28:20.006680 master-0 kubenswrapper[27835]: I0318 13:28:20.000718 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:28:20.006680 master-0 kubenswrapper[27835]: I0318 13:28:20.000878 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:28:20.300934 master-0 kubenswrapper[27835]: I0318 13:28:20.300800 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 13:28:20.671302 master-0 kubenswrapper[27835]: I0318 13:28:20.671233 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 13:28:21.078696 master-0 kubenswrapper[27835]: I0318 13:28:21.078571 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 13:28:21.287760 master-0 kubenswrapper[27835]: I0318 13:28:21.287504 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 13:28:22.026501 master-0 kubenswrapper[27835]: I0318 13:28:22.026389 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7rrtk" Mar 18 13:28:22.184817 master-0 kubenswrapper[27835]: I0318 13:28:22.184770 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 13:28:22.303127 master-0 kubenswrapper[27835]: I0318 13:28:22.302973 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 13:28:22.619979 master-0 kubenswrapper[27835]: I0318 13:28:22.619767 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 13:28:22.934754 master-0 kubenswrapper[27835]: I0318 13:28:22.934691 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 
13:28:23.048830 master-0 kubenswrapper[27835]: I0318 13:28:23.048752 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 13:28:23.304493 master-0 kubenswrapper[27835]: I0318 13:28:23.304239 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 13:28:23.391372 master-0 kubenswrapper[27835]: I0318 13:28:23.391298 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 13:28:23.661096 master-0 kubenswrapper[27835]: I0318 13:28:23.661032 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 13:28:23.928142 master-0 kubenswrapper[27835]: I0318 13:28:23.927885 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 13:28:24.277872 master-0 kubenswrapper[27835]: I0318 13:28:24.277696 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 13:28:24.281573 master-0 kubenswrapper[27835]: I0318 13:28:24.281493 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:28:24.643200 master-0 kubenswrapper[27835]: I0318 13:28:24.643121 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:28:24.644402 master-0 kubenswrapper[27835]: I0318 13:28:24.644371 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 13:28:24.838877 master-0 kubenswrapper[27835]: I0318 13:28:24.838728 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 13:28:25.122952 master-0 kubenswrapper[27835]: I0318 13:28:25.122760 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/2.log" Mar 18 13:28:25.125392 master-0 kubenswrapper[27835]: I0318 13:28:25.125321 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c129e07da670ff3af256d72652e4b1da","Type":"ContainerStarted","Data":"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529"} Mar 18 13:28:25.126047 master-0 kubenswrapper[27835]: I0318 13:28:25.125959 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 13:28:25.222522 master-0 kubenswrapper[27835]: I0318 13:28:25.222450 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-vxpff" Mar 18 13:28:25.787507 master-0 kubenswrapper[27835]: I0318 13:28:25.787389 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 13:28:25.853466 master-0 kubenswrapper[27835]: I0318 13:28:25.853317 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 13:28:26.190000 master-0 kubenswrapper[27835]: I0318 13:28:26.189952 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 13:28:27.007656 master-0 kubenswrapper[27835]: I0318 13:28:27.007597 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 13:28:27.405941 master-0 kubenswrapper[27835]: I0318 13:28:27.405779 27835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 13:28:27.437575 master-0 kubenswrapper[27835]: I0318 13:28:27.437500 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:28:27.473662 master-0 kubenswrapper[27835]: I0318 13:28:27.473567 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:28:27.926196 master-0 kubenswrapper[27835]: I0318 13:28:27.926096 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 13:28:27.958129 master-0 kubenswrapper[27835]: I0318 13:28:27.958043 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:28:27.958506 master-0 kubenswrapper[27835]: I0318 13:28:27.958129 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:28:28.084328 master-0 kubenswrapper[27835]: I0318 13:28:28.084173 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:28.084328 master-0 kubenswrapper[27835]: I0318 13:28:28.084241 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:28.085116 master-0 kubenswrapper[27835]: I0318 13:28:28.085084 27835 patch_prober.go:28] interesting 
pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:28:28.085203 master-0 kubenswrapper[27835]: I0318 13:28:28.085128 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:28:28.182588 master-0 kubenswrapper[27835]: I0318 13:28:28.182381 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 13:28:28.620720 master-0 kubenswrapper[27835]: I0318 13:28:28.620484 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 13:28:29.999059 master-0 kubenswrapper[27835]: I0318 13:28:29.998975 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:28:29.999059 master-0 kubenswrapper[27835]: I0318 13:28:29.999055 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:28:30.470319 master-0 kubenswrapper[27835]: I0318 13:28:30.470230 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 13:28:30.802972 master-0 kubenswrapper[27835]: I0318 13:28:30.802729 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:28:32.444031 master-0 kubenswrapper[27835]: I0318 13:28:32.443973 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:28:33.903453 master-0 kubenswrapper[27835]: I0318 13:28:33.903364 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:28:33.915716 master-0 kubenswrapper[27835]: I0318 13:28:33.915649 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:28:34.854633 master-0 kubenswrapper[27835]: I0318 13:28:34.854533 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-67jff" Mar 18 13:28:34.893627 master-0 kubenswrapper[27835]: I0318 13:28:34.893543 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:28:35.324518 master-0 kubenswrapper[27835]: I0318 13:28:35.324403 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:28:35.838442 master-0 kubenswrapper[27835]: I0318 13:28:35.838353 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:28:36.359801 master-0 kubenswrapper[27835]: I0318 13:28:36.359737 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:28:37.752962 master-0 kubenswrapper[27835]: I0318 13:28:37.752858 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-serving-cert" Mar 18 13:28:37.958087 master-0 kubenswrapper[27835]: I0318 13:28:37.957993 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:28:37.958087 master-0 kubenswrapper[27835]: I0318 13:28:37.958066 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:28:38.084666 master-0 kubenswrapper[27835]: I0318 13:28:38.084484 27835 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 13:28:38.084666 master-0 kubenswrapper[27835]: I0318 13:28:38.084550 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 13:28:38.582235 master-0 kubenswrapper[27835]: I0318 13:28:38.582126 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 13:28:38.869785 master-0 kubenswrapper[27835]: I0318 13:28:38.869663 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 
13:28:39.500194 master-0 kubenswrapper[27835]: I0318 13:28:39.500139 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:28:39.998849 master-0 kubenswrapper[27835]: I0318 13:28:39.998759 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:28:39.998849 master-0 kubenswrapper[27835]: I0318 13:28:39.998830 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:28:40.476568 master-0 kubenswrapper[27835]: I0318 13:28:40.475000 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 13:28:40.721512 master-0 kubenswrapper[27835]: I0318 13:28:40.721383 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:28:40.821812 master-0 kubenswrapper[27835]: I0318 13:28:40.821657 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 13:28:41.552157 master-0 kubenswrapper[27835]: I0318 13:28:41.552063 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:28:41.908784 master-0 kubenswrapper[27835]: I0318 13:28:41.908728 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-9lwzk" Mar 18 13:28:42.725065 master-0 
kubenswrapper[27835]: I0318 13:28:42.724713 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:28:43.095956 master-0 kubenswrapper[27835]: I0318 13:28:43.095754 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 13:28:43.295991 master-0 kubenswrapper[27835]: I0318 13:28:43.295885 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218" exitCode=0 Mar 18 13:28:43.295991 master-0 kubenswrapper[27835]: I0318 13:28:43.295974 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218"} Mar 18 13:28:43.296330 master-0 kubenswrapper[27835]: I0318 13:28:43.296024 27835 scope.go:117] "RemoveContainer" containerID="f43bc8a8bbbc899289099dead977b3860d9a72dc536beb3c67e961db9a9900e8" Mar 18 13:28:43.297004 master-0 kubenswrapper[27835]: I0318 13:28:43.296956 27835 scope.go:117] "RemoveContainer" containerID="1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218" Mar 18 13:28:43.303406 master-0 kubenswrapper[27835]: I0318 13:28:43.303319 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:28:43.366946 master-0 kubenswrapper[27835]: I0318 13:28:43.366896 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6fb5w" Mar 18 13:28:43.603789 master-0 kubenswrapper[27835]: I0318 13:28:43.603699 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:28:43.854782 master-0 
kubenswrapper[27835]: I0318 13:28:43.854723 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:28:43.881828 master-0 kubenswrapper[27835]: I0318 13:28:43.881582 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:28:44.280689 master-0 kubenswrapper[27835]: I0318 13:28:44.280614 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:28:44.302788 master-0 kubenswrapper[27835]: I0318 13:28:44.302757 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-99pzm_fe643e40-d06d-4e69-9be3-0065c2a78567/marketplace-operator/3.log" Mar 18 13:28:44.303478 master-0 kubenswrapper[27835]: I0318 13:28:44.303440 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c" exitCode=1 Mar 18 13:28:44.303565 master-0 kubenswrapper[27835]: I0318 13:28:44.303488 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c"} Mar 18 13:28:44.303565 master-0 kubenswrapper[27835]: I0318 13:28:44.303527 27835 scope.go:117] "RemoveContainer" containerID="1e569b3cafd93d8af4f801b48428238651c12ec610106d9d95db5f8c5cc1b218" Mar 18 13:28:44.305965 master-0 kubenswrapper[27835]: I0318 13:28:44.305927 27835 scope.go:117] "RemoveContainer" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c" Mar 18 13:28:44.306230 master-0 kubenswrapper[27835]: E0318 13:28:44.306194 27835 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 13:28:44.528199 master-0 kubenswrapper[27835]: I0318 13:28:44.528125 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:28:44.556162 master-0 kubenswrapper[27835]: I0318 13:28:44.556024 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:28:44.780127 master-0 kubenswrapper[27835]: I0318 13:28:44.780027 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:28:45.312287 master-0 kubenswrapper[27835]: I0318 13:28:45.312216 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-99pzm_fe643e40-d06d-4e69-9be3-0065c2a78567/marketplace-operator/3.log" Mar 18 13:28:45.545759 master-0 kubenswrapper[27835]: I0318 13:28:45.545632 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:28:45.720345 master-0 kubenswrapper[27835]: I0318 13:28:45.720237 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 13:28:45.955991 master-0 kubenswrapper[27835]: I0318 13:28:45.955919 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:28:46.043723 master-0 kubenswrapper[27835]: I0318 13:28:46.043576 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" 
Mar 18 13:28:46.075182 master-0 kubenswrapper[27835]: I0318 13:28:46.075131 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:28:46.178544 master-0 kubenswrapper[27835]: I0318 13:28:46.178393 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:28:46.307505 master-0 kubenswrapper[27835]: I0318 13:28:46.307264 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:28:47.047189 master-0 kubenswrapper[27835]: I0318 13:28:47.047120 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:28:47.058850 master-0 kubenswrapper[27835]: I0318 13:28:47.058804 27835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:28:47.069631 master-0 kubenswrapper[27835]: I0318 13:28:47.069590 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:28:47.154225 master-0 kubenswrapper[27835]: I0318 13:28:47.154115 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx" Mar 18 13:28:47.429006 master-0 kubenswrapper[27835]: I0318 13:28:47.428944 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 13:28:47.443371 master-0 kubenswrapper[27835]: I0318 13:28:47.443343 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:28:47.957593 master-0 kubenswrapper[27835]: I0318 13:28:47.957507 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:28:47.957968 master-0 kubenswrapper[27835]: I0318 13:28:47.957585 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:28:48.091477 master-0 kubenswrapper[27835]: I0318 13:28:48.091362 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:48.099162 master-0 kubenswrapper[27835]: I0318 13:28:48.099109 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:28:48.203895 master-0 kubenswrapper[27835]: I0318 13:28:48.203817 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 13:28:48.487228 master-0 kubenswrapper[27835]: I0318 13:28:48.486883 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:28:48.640835 master-0 kubenswrapper[27835]: I0318 13:28:48.640750 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:28:48.695215 master-0 kubenswrapper[27835]: I0318 13:28:48.694239 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 13:28:48.738791 master-0 kubenswrapper[27835]: I0318 13:28:48.738495 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 
13:28:48.796013 master-0 kubenswrapper[27835]: I0318 13:28:48.795947 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 13:28:48.933762 master-0 kubenswrapper[27835]: I0318 13:28:48.933653 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 13:28:49.146355 master-0 kubenswrapper[27835]: I0318 13:28:49.146133 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:28:49.998863 master-0 kubenswrapper[27835]: I0318 13:28:49.998676 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:28:49.998863 master-0 kubenswrapper[27835]: I0318 13:28:49.998774 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:28:50.696583 master-0 kubenswrapper[27835]: I0318 13:28:50.695392 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:28:50.719122 master-0 kubenswrapper[27835]: I0318 13:28:50.719024 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 13:28:50.881784 master-0 kubenswrapper[27835]: I0318 13:28:50.881702 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 13:28:51.113340 master-0 kubenswrapper[27835]: I0318 13:28:51.113171 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 13:28:51.113588 master-0 kubenswrapper[27835]: I0318 13:28:51.113204 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:28:51.666068 master-0 kubenswrapper[27835]: I0318 13:28:51.665995 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:28:51.679540 master-0 kubenswrapper[27835]: I0318 13:28:51.678143 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:28:51.745774 master-0 kubenswrapper[27835]: I0318 13:28:51.745699 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:28:51.794538 master-0 kubenswrapper[27835]: I0318 13:28:51.794447 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:28:51.794853 master-0 kubenswrapper[27835]: I0318 13:28:51.794805 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" Mar 18 13:28:51.794941 master-0 kubenswrapper[27835]: I0318 13:28:51.794902 27835 scope.go:117] "RemoveContainer" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c" Mar 18 13:28:51.796596 master-0 kubenswrapper[27835]: E0318 13:28:51.796543 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 
13:28:51.902365 master-0 kubenswrapper[27835]: I0318 13:28:51.902299 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:28:52.042078 master-0 kubenswrapper[27835]: I0318 13:28:52.041685 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:28:52.390194 master-0 kubenswrapper[27835]: I0318 13:28:52.390040 27835 scope.go:117] "RemoveContainer" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c" Mar 18 13:28:52.390504 master-0 kubenswrapper[27835]: E0318 13:28:52.390381 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" Mar 18 13:28:53.085647 master-0 kubenswrapper[27835]: I0318 13:28:53.085519 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:28:53.379498 master-0 kubenswrapper[27835]: I0318 13:28:53.378952 27835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:28:53.379498 master-0 kubenswrapper[27835]: I0318 13:28:53.379053 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bb86d5d56-ffwhx" podStartSLOduration=74.379033281 podStartE2EDuration="1m14.379033281s" podCreationTimestamp="2026-03-18 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:27:59.621371125 +0000 UTC m=+243.586582715" 
watchObservedRunningTime="2026-03-18 13:28:53.379033281 +0000 UTC m=+297.344244851" Mar 18 13:28:53.387720 master-0 kubenswrapper[27835]: I0318 13:28:53.387654 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-console/console-5df65d974f-mpf5j","openshift-console/console-686bcb5cf-88rcq"] Mar 18 13:28:53.387884 master-0 kubenswrapper[27835]: I0318 13:28:53.387755 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:28:53.395557 master-0 kubenswrapper[27835]: I0318 13:28:53.395489 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:28:53.429118 master-0 kubenswrapper[27835]: I0318 13:28:53.428626 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=54.428567799 podStartE2EDuration="54.428567799s" podCreationTimestamp="2026-03-18 13:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:28:53.415351004 +0000 UTC m=+297.380562564" watchObservedRunningTime="2026-03-18 13:28:53.428567799 +0000 UTC m=+297.393779389" Mar 18 13:28:53.451260 master-0 kubenswrapper[27835]: I0318 13:28:53.451084 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:28:53.482432 master-0 kubenswrapper[27835]: I0318 13:28:53.482185 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 13:28:53.520003 master-0 kubenswrapper[27835]: I0318 13:28:53.519916 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:28:53.527889 master-0 kubenswrapper[27835]: I0318 
13:28:53.527824 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 13:28:53.584521 master-0 kubenswrapper[27835]: I0318 13:28:53.584439 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:28:53.711840 master-0 kubenswrapper[27835]: I0318 13:28:53.711741 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-hl5hl" Mar 18 13:28:53.752110 master-0 kubenswrapper[27835]: I0318 13:28:53.752025 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:28:54.096035 master-0 kubenswrapper[27835]: I0318 13:28:54.095862 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:28:54.119220 master-0 kubenswrapper[27835]: I0318 13:28:54.119135 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-sn888" Mar 18 13:28:54.168376 master-0 kubenswrapper[27835]: I0318 13:28:54.168262 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 13:28:54.297103 master-0 kubenswrapper[27835]: I0318 13:28:54.296980 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" path="/var/lib/kubelet/pods/18d00d36-387c-4c03-affa-9abc8e2d4fe0/volumes" Mar 18 13:28:54.298367 master-0 kubenswrapper[27835]: I0318 13:28:54.298306 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" path="/var/lib/kubelet/pods/a42bf050-6c38-4023-a8b4-dc795f3aadc7/volumes" Mar 18 13:28:54.488362 master-0 kubenswrapper[27835]: I0318 13:28:54.488256 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 13:28:54.496645 master-0 kubenswrapper[27835]: I0318 13:28:54.496571 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-dz6jc" Mar 18 13:28:54.545097 master-0 kubenswrapper[27835]: I0318 13:28:54.544986 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:28:54.588798 master-0 kubenswrapper[27835]: I0318 13:28:54.588464 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 13:28:54.757557 master-0 kubenswrapper[27835]: I0318 13:28:54.757377 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:28:55.020391 master-0 kubenswrapper[27835]: I0318 13:28:55.020176 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 13:28:55.046085 master-0 kubenswrapper[27835]: I0318 13:28:55.046015 27835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:28:55.087257 master-0 kubenswrapper[27835]: I0318 13:28:55.087183 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:28:55.371227 master-0 kubenswrapper[27835]: I0318 13:28:55.371076 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:28:55.905559 master-0 kubenswrapper[27835]: I0318 13:28:55.905477 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:28:55.999150 master-0 kubenswrapper[27835]: I0318 13:28:55.998958 27835 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-image-registry"/"trusted-ca" Mar 18 13:28:56.233019 master-0 kubenswrapper[27835]: I0318 13:28:56.232939 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:28:56.233438 master-0 kubenswrapper[27835]: I0318 13:28:56.233290 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" containerID="cri-o://79bf0f7a4318c4fff8d9359a7475c6cfeeb6f4f8213f77abdbf1e29b054b000e" gracePeriod=5 Mar 18 13:28:56.247675 master-0 kubenswrapper[27835]: I0318 13:28:56.247630 27835 kubelet.go:1505] "Image garbage collection succeeded" Mar 18 13:28:56.279313 master-0 kubenswrapper[27835]: I0318 13:28:56.279181 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:28:56.283175 master-0 kubenswrapper[27835]: I0318 13:28:56.283101 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:28:56.295790 master-0 kubenswrapper[27835]: I0318 13:28:56.295720 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:28:56.330309 master-0 kubenswrapper[27835]: I0318 13:28:56.330248 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-sjstk" Mar 18 13:28:56.417208 master-0 kubenswrapper[27835]: I0318 13:28:56.417150 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:28:56.755478 master-0 kubenswrapper[27835]: I0318 13:28:56.755397 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 
13:28:57.003546 master-0 kubenswrapper[27835]: I0318 13:28:57.003469 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:28:57.058094 master-0 kubenswrapper[27835]: I0318 13:28:57.057918 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l" Mar 18 13:28:57.826649 master-0 kubenswrapper[27835]: I0318 13:28:57.826563 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 13:28:57.958175 master-0 kubenswrapper[27835]: I0318 13:28:57.958102 27835 patch_prober.go:28] interesting pod/console-684cf44489-lfkt8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 18 13:28:57.958428 master-0 kubenswrapper[27835]: I0318 13:28:57.958183 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" Mar 18 13:28:58.711234 master-0 kubenswrapper[27835]: I0318 13:28:58.711149 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:28:58.834876 master-0 kubenswrapper[27835]: I0318 13:28:58.834783 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:28:58.893167 master-0 kubenswrapper[27835]: I0318 13:28:58.893122 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 13:28:59.112332 master-0 
kubenswrapper[27835]: I0318 13:28:59.111959 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:28:59.294839 master-0 kubenswrapper[27835]: I0318 13:28:59.294774 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-7kt87" Mar 18 13:28:59.485775 master-0 kubenswrapper[27835]: I0318 13:28:59.485733 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:28:59.666624 master-0 kubenswrapper[27835]: I0318 13:28:59.666582 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:28:59.797508 master-0 kubenswrapper[27835]: I0318 13:28:59.797347 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lz5d6" Mar 18 13:28:59.887044 master-0 kubenswrapper[27835]: I0318 13:28:59.887003 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:28:59.998835 master-0 kubenswrapper[27835]: I0318 13:28:59.998789 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:28:59.999122 master-0 kubenswrapper[27835]: I0318 13:28:59.999089 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:29:00.257799 master-0 kubenswrapper[27835]: I0318 13:29:00.257730 27835 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:29:00.333016 master-0 kubenswrapper[27835]: I0318 13:29:00.332976 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:29:00.357331 master-0 kubenswrapper[27835]: I0318 13:29:00.357272 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 13:29:00.511026 master-0 kubenswrapper[27835]: I0318 13:29:00.510930 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5" Mar 18 13:29:00.711232 master-0 kubenswrapper[27835]: I0318 13:29:00.711181 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:29:00.711909 master-0 kubenswrapper[27835]: I0318 13:29:00.711323 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 13:29:01.379815 master-0 kubenswrapper[27835]: I0318 13:29:01.379743 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:29:01.462572 master-0 kubenswrapper[27835]: I0318 13:29:01.462516 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log" Mar 18 13:29:01.462795 master-0 kubenswrapper[27835]: I0318 13:29:01.462573 27835 generic.go:334] "Generic (PLEG): container finished" podID="85632c1cec8974aa874834e4cfff4c77" containerID="79bf0f7a4318c4fff8d9359a7475c6cfeeb6f4f8213f77abdbf1e29b054b000e" exitCode=137 Mar 18 13:29:01.494090 master-0 kubenswrapper[27835]: I0318 13:29:01.493952 27835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:29:01.676484 master-0 kubenswrapper[27835]: I0318 13:29:01.674795 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 13:29:01.790903 master-0 kubenswrapper[27835]: I0318 13:29:01.790837 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:29:01.819012 master-0 kubenswrapper[27835]: I0318 13:29:01.818962 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log" Mar 18 13:29:01.819275 master-0 kubenswrapper[27835]: I0318 13:29:01.819043 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.894171 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.894301 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.894483 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 
13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.894583 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.894666 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.895125 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests" (OuterVolumeSpecName: "manifests") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.895172 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.895192 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock" (OuterVolumeSpecName: "var-lock") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.895210 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log" (OuterVolumeSpecName: "var-log") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:01.905446 master-0 kubenswrapper[27835]: I0318 13:29:01.903670 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:29:01.916615 master-0 kubenswrapper[27835]: I0318 13:29:01.910635 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:29:02.008205 master-0 kubenswrapper[27835]: I0318 13:29:02.008070 27835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:02.008205 master-0 kubenswrapper[27835]: I0318 13:29:02.008129 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:02.008205 master-0 kubenswrapper[27835]: I0318 13:29:02.008161 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:02.008205 master-0 kubenswrapper[27835]: I0318 13:29:02.008172 27835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:02.008205 master-0 kubenswrapper[27835]: I0318 13:29:02.008186 27835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:02.017755 master-0 kubenswrapper[27835]: I0318 13:29:02.017712 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:29:02.166373 master-0 kubenswrapper[27835]: I0318 13:29:02.166323 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:29:02.293345 master-0 kubenswrapper[27835]: I0318 13:29:02.291294 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:29:02.302831 master-0 kubenswrapper[27835]: I0318 13:29:02.302766 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85632c1cec8974aa874834e4cfff4c77" path="/var/lib/kubelet/pods/85632c1cec8974aa874834e4cfff4c77/volumes" Mar 18 13:29:02.373076 master-0 kubenswrapper[27835]: I0318 13:29:02.372733 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 13:29:02.436577 master-0 kubenswrapper[27835]: I0318 13:29:02.436498 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 13:29:02.465110 master-0 kubenswrapper[27835]: I0318 13:29:02.463541 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 13:29:02.486923 master-0 kubenswrapper[27835]: I0318 13:29:02.486873 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log" Mar 18 13:29:02.487049 master-0 kubenswrapper[27835]: I0318 13:29:02.486989 27835 scope.go:117] "RemoveContainer" containerID="79bf0f7a4318c4fff8d9359a7475c6cfeeb6f4f8213f77abdbf1e29b054b000e" Mar 18 13:29:02.487154 master-0 kubenswrapper[27835]: I0318 13:29:02.487108 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:29:02.553322 master-0 kubenswrapper[27835]: I0318 13:29:02.553190 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:29:02.770155 master-0 kubenswrapper[27835]: I0318 13:29:02.770087 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-q8tt6" Mar 18 13:29:02.791592 master-0 kubenswrapper[27835]: I0318 13:29:02.791549 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:29:02.957612 master-0 kubenswrapper[27835]: I0318 13:29:02.957562 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b" Mar 18 13:29:02.970446 master-0 kubenswrapper[27835]: I0318 13:29:02.970375 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:29:03.083874 master-0 kubenswrapper[27835]: I0318 13:29:03.083801 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 13:29:03.310189 master-0 kubenswrapper[27835]: I0318 13:29:03.310056 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 13:29:03.422337 master-0 kubenswrapper[27835]: I0318 13:29:03.422253 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 13:29:03.573915 master-0 kubenswrapper[27835]: I0318 13:29:03.573782 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hr2xw" Mar 18 13:29:03.589110 master-0 kubenswrapper[27835]: I0318 13:29:03.589062 
27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 13:29:03.682710 master-0 kubenswrapper[27835]: I0318 13:29:03.682647 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 13:29:03.705939 master-0 kubenswrapper[27835]: I0318 13:29:03.705872 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 13:29:03.725470 master-0 kubenswrapper[27835]: I0318 13:29:03.725272 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 13:29:04.116261 master-0 kubenswrapper[27835]: I0318 13:29:04.116209 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 13:29:04.280745 master-0 kubenswrapper[27835]: I0318 13:29:04.280697 27835 scope.go:117] "RemoveContainer" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c"
Mar 18 13:29:04.454580 master-0 kubenswrapper[27835]: I0318 13:29:04.454519 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:29:04.486965 master-0 kubenswrapper[27835]: I0318 13:29:04.486911 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:29:04.504993 master-0 kubenswrapper[27835]: I0318 13:29:04.504948 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-99pzm_fe643e40-d06d-4e69-9be3-0065c2a78567/marketplace-operator/3.log"
Mar 18 13:29:04.505202 master-0 kubenswrapper[27835]: I0318 13:29:04.505013 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514"}
Mar 18 13:29:04.505442 master-0 kubenswrapper[27835]: I0318 13:29:04.505401 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:29:04.506930 master-0 kubenswrapper[27835]: I0318 13:29:04.506878 27835 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:29:04.506999 master-0 kubenswrapper[27835]: I0318 13:29:04.506952 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:29:04.604854 master-0 kubenswrapper[27835]: I0318 13:29:04.604799 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 13:29:05.087282 master-0 kubenswrapper[27835]: I0318 13:29:05.087225 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 13:29:05.240117 master-0 kubenswrapper[27835]: I0318 13:29:05.240072 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 13:29:05.488694 master-0 kubenswrapper[27835]: I0318 13:29:05.488652 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqvgp"
Mar 18 13:29:05.516347 master-0 kubenswrapper[27835]: I0318
13:29:05.516307 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:29:05.677562 master-0 kubenswrapper[27835]: I0318 13:29:05.677502 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 13:29:05.799192 master-0 kubenswrapper[27835]: I0318 13:29:05.799068 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 13:29:05.835874 master-0 kubenswrapper[27835]: I0318 13:29:05.835801 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 13:29:05.870481 master-0 kubenswrapper[27835]: I0318 13:29:05.870375 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 13:29:05.982065 master-0 kubenswrapper[27835]: I0318 13:29:05.982005 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 13:29:06.153168 master-0 kubenswrapper[27835]: I0318 13:29:06.153061 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 13:29:06.208652 master-0 kubenswrapper[27835]: I0318 13:29:06.208603 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 13:29:06.287309 master-0 kubenswrapper[27835]: I0318 13:29:06.287267 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 18 13:29:06.515757 master-0 kubenswrapper[27835]: I0318 13:29:06.515698 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-684cf44489-lfkt8"]
Mar 18 13:29:06.572201 master-0 kubenswrapper[27835]: I0318 13:29:06.572132 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"]
Mar 18 13:29:06.572578 master-0 kubenswrapper[27835]: E0318 13:29:06.572538 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console"
Mar 18 13:29:06.572638 master-0 kubenswrapper[27835]: I0318 13:29:06.572576 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console"
Mar 18 13:29:06.572638 master-0 kubenswrapper[27835]: E0318 13:29:06.572609 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console"
Mar 18 13:29:06.572638 master-0 kubenswrapper[27835]: I0318 13:29:06.572622 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console"
Mar 18 13:29:06.572737 master-0 kubenswrapper[27835]: E0318 13:29:06.572648 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 13:29:06.572737 master-0 kubenswrapper[27835]: I0318 13:29:06.572660 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 13:29:06.572737 master-0 kubenswrapper[27835]: E0318 13:29:06.572679 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" containerName="installer"
Mar 18 13:29:06.572737 master-0 kubenswrapper[27835]: I0318 13:29:06.572690 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" containerName="installer"
Mar 18 13:29:06.572916 master-0 kubenswrapper[27835]: I0318 13:29:06.572886 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf7b950-0686-4e8e-87da-84c45d4ca1b4" containerName="installer"
Mar 18 13:29:06.572958 master-0
kubenswrapper[27835]: I0318 13:29:06.572939 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42bf050-6c38-4023-a8b4-dc795f3aadc7" containerName="console"
Mar 18 13:29:06.572991 master-0 kubenswrapper[27835]: I0318 13:29:06.572958 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d00d36-387c-4c03-affa-9abc8e2d4fe0" containerName="console"
Mar 18 13:29:06.572991 master-0 kubenswrapper[27835]: I0318 13:29:06.572974 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 13:29:06.573613 master-0 kubenswrapper[27835]: I0318 13:29:06.573574 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.578294 master-0 kubenswrapper[27835]: I0318 13:29:06.578236 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"]
Mar 18 13:29:06.637140 master-0 kubenswrapper[27835]: I0318 13:29:06.637091 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 13:29:06.703630 master-0 kubenswrapper[27835]: I0318 13:29:06.703567 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.703630 master-0 kubenswrapper[27835]: I0318 13:29:06.703630 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.704155 master-0 kubenswrapper[27835]: I0318 13:29:06.703656 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9j8\" (UniqueName: \"kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.704155 master-0 kubenswrapper[27835]: I0318 13:29:06.703672 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.704155 master-0 kubenswrapper[27835]: I0318 13:29:06.703695 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.704155 master-0 kubenswrapper[27835]: I0318 13:29:06.703721 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.704155 master-0 kubenswrapper[27835]: I0318 13:29:06.703774 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.804127 master-0 kubenswrapper[27835]: I0318 13:29:06.804001 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 13:29:06.804795 master-0 kubenswrapper[27835]: I0318 13:29:06.804751 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.804918 master-0 kubenswrapper[27835]: I0318 13:29:06.804891 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.805077 master-0 kubenswrapper[27835]: I0318 13:29:06.805053 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.805153 master-0 kubenswrapper[27835]: I0318 13:29:06.805109 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9j8\" (UniqueName: \"kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8\") pod \"console-5d794fddf9-gh6gq\" (UID:
\"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.805153 master-0 kubenswrapper[27835]: I0318 13:29:06.805136 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.805272 master-0 kubenswrapper[27835]: I0318 13:29:06.805161 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.805356 master-0 kubenswrapper[27835]: I0318 13:29:06.805330 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.806141 master-0 kubenswrapper[27835]: I0318 13:29:06.806087 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.806222 master-0 kubenswrapper[27835]: I0318 13:29:06.806147 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.806655 master-0 kubenswrapper[27835]: I0318 13:29:06.806590 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.806816 master-0 kubenswrapper[27835]: I0318 13:29:06.806771 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.811765 master-0 kubenswrapper[27835]: I0318 13:29:06.811719 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.813331 master-0 kubenswrapper[27835]: I0318 13:29:06.813292 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.820881 master-0 kubenswrapper[27835]: I0318 13:29:06.820827 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9j8\" (UniqueName: \"kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8\") pod \"console-5d794fddf9-gh6gq\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") " pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.865148 master-0 kubenswrapper[27835]: I0318 13:29:06.865090 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 13:29:06.882595 master-0 kubenswrapper[27835]: I0318 13:29:06.882564 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:29:06.899370 master-0 kubenswrapper[27835]: I0318 13:29:06.899319 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:06.913914 master-0 kubenswrapper[27835]: I0318 13:29:06.913846 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 13:29:07.027572 master-0 kubenswrapper[27835]: I0318 13:29:07.027504 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 13:29:07.268085 master-0 kubenswrapper[27835]: I0318 13:29:07.267931 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 13:29:07.497086 master-0 kubenswrapper[27835]: I0318 13:29:07.497028 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 13:29:07.522398 master-0 kubenswrapper[27835]: I0318 13:29:07.522276 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 13:29:07.531609 master-0 kubenswrapper[27835]: I0318 13:29:07.531554 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 13:29:07.576864 master-0 kubenswrapper[27835]: I0318
13:29:07.576788 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 13:29:07.622564 master-0 kubenswrapper[27835]: I0318 13:29:07.622474 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 13:29:07.729719 master-0 kubenswrapper[27835]: I0318 13:29:07.729655 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:29:07.889725 master-0 kubenswrapper[27835]: I0318 13:29:07.889604 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 18 13:29:08.130088 master-0 kubenswrapper[27835]: I0318 13:29:08.130035 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 13:29:08.157762 master-0 kubenswrapper[27835]: I0318 13:29:08.157663 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 13:29:08.535584 master-0 kubenswrapper[27835]: I0318 13:29:08.535527 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 13:29:08.600554 master-0 kubenswrapper[27835]: I0318 13:29:08.600482 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 13:29:08.656696 master-0 kubenswrapper[27835]: I0318 13:29:08.656638 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mbkdw"
Mar 18 13:29:08.684484 master-0 kubenswrapper[27835]: I0318 13:29:08.684403 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 13:29:08.723076 master-0 kubenswrapper[27835]: I0318 13:29:08.723034 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 18 13:29:08.807028 master-0 kubenswrapper[27835]: I0318 13:29:08.806895 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 13:29:08.856146 master-0 kubenswrapper[27835]: I0318 13:29:08.856090 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vbmv6"
Mar 18 13:29:08.956573 master-0 kubenswrapper[27835]: I0318 13:29:08.956523 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 13:29:09.031744 master-0 kubenswrapper[27835]: I0318 13:29:09.031661 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:29:09.100086 master-0 kubenswrapper[27835]: I0318 13:29:09.099926 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 13:29:09.115459 master-0 kubenswrapper[27835]: I0318 13:29:09.115358 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-kcq89"
Mar 18 13:29:09.218934 master-0 kubenswrapper[27835]: I0318 13:29:09.218847 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 13:29:09.728522 master-0 kubenswrapper[27835]: I0318 13:29:09.728435 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 13:29:09.771596 master-0 kubenswrapper[27835]: I0318 13:29:09.771451 27835 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 13:29:09.906777 master-0 kubenswrapper[27835]: I0318 13:29:09.906697 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 13:29:09.938356 master-0 kubenswrapper[27835]: I0318 13:29:09.938282 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 18 13:29:10.000586 master-0 kubenswrapper[27835]: I0318 13:29:09.998847 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body=
Mar 18 13:29:10.000586 master-0 kubenswrapper[27835]: I0318 13:29:09.999023 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused"
Mar 18 13:29:10.105985 master-0 kubenswrapper[27835]: E0318 13:29:10.105937 27835 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 13:29:10.105985 master-0 kubenswrapper[27835]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-5d794fddf9-gh6gq_openshift-console_414429b2-4ccb-49cd-8bae-f9a6ab653831_0(7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501): error adding pod openshift-console_console-5d794fddf9-gh6gq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501" Netns:"/var/run/netns/b3539a91-4920-4b0b-a509-5f9c16f4c995" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d794fddf9-gh6gq;K8S_POD_INFRA_CONTAINER_ID=7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501;K8S_POD_UID=414429b2-4ccb-49cd-8bae-f9a6ab653831" Path:"" ERRORED: error configuring pod [openshift-console/console-5d794fddf9-gh6gq] networking: Multus: [openshift-console/console-5d794fddf9-gh6gq/414429b2-4ccb-49cd-8bae-f9a6ab653831]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod console-5d794fddf9-gh6gq in out of cluster comm: pod "console-5d794fddf9-gh6gq" not found
Mar 18 13:29:10.105985 master-0 kubenswrapper[27835]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 13:29:10.105985 master-0 kubenswrapper[27835]: >
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: E0318 13:29:10.106026 27835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-5d794fddf9-gh6gq_openshift-console_414429b2-4ccb-49cd-8bae-f9a6ab653831_0(7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501): error adding pod openshift-console_console-5d794fddf9-gh6gq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501" Netns:"/var/run/netns/b3539a91-4920-4b0b-a509-5f9c16f4c995" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d794fddf9-gh6gq;K8S_POD_INFRA_CONTAINER_ID=7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501;K8S_POD_UID=414429b2-4ccb-49cd-8bae-f9a6ab653831" Path:"" ERRORED: error configuring pod [openshift-console/console-5d794fddf9-gh6gq] networking: Multus: [openshift-console/console-5d794fddf9-gh6gq/414429b2-4ccb-49cd-8bae-f9a6ab653831]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod console-5d794fddf9-gh6gq in out of cluster comm: pod "console-5d794fddf9-gh6gq" not found
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: > pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: E0318 13:29:10.106049 27835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-5d794fddf9-gh6gq_openshift-console_414429b2-4ccb-49cd-8bae-f9a6ab653831_0(7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501): error adding pod openshift-console_console-5d794fddf9-gh6gq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501" Netns:"/var/run/netns/b3539a91-4920-4b0b-a509-5f9c16f4c995" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d794fddf9-gh6gq;K8S_POD_INFRA_CONTAINER_ID=7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501;K8S_POD_UID=414429b2-4ccb-49cd-8bae-f9a6ab653831" Path:"" ERRORED: error configuring pod [openshift-console/console-5d794fddf9-gh6gq] networking: Multus: [openshift-console/console-5d794fddf9-gh6gq/414429b2-4ccb-49cd-8bae-f9a6ab653831]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod console-5d794fddf9-gh6gq in out of cluster comm: pod "console-5d794fddf9-gh6gq" not found
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: > pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:29:10.106247 master-0 kubenswrapper[27835]: E0318 13:29:10.106117 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-5d794fddf9-gh6gq_openshift-console(414429b2-4ccb-49cd-8bae-f9a6ab653831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-5d794fddf9-gh6gq_openshift-console(414429b2-4ccb-49cd-8bae-f9a6ab653831)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-5d794fddf9-gh6gq_openshift-console_414429b2-4ccb-49cd-8bae-f9a6ab653831_0(7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501): error adding pod openshift-console_console-5d794fddf9-gh6gq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501\\\" Netns:\\\"/var/run/netns/b3539a91-4920-4b0b-a509-5f9c16f4c995\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-5d794fddf9-gh6gq;K8S_POD_INFRA_CONTAINER_ID=7356e81b3131f8e60082121883ec55f6e0f43493e25679c171ba04c969ea9501;K8S_POD_UID=414429b2-4ccb-49cd-8bae-f9a6ab653831\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/console-5d794fddf9-gh6gq] networking: Multus: [openshift-console/console-5d794fddf9-gh6gq/414429b2-4ccb-49cd-8bae-f9a6ab653831]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod console-5d794fddf9-gh6gq in out of cluster comm: pod \\\"console-5d794fddf9-gh6gq\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831"
Mar 18 13:29:10.255839 master-0 kubenswrapper[27835]: I0318 13:29:10.255667 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 13:29:10.302901 master-0 kubenswrapper[27835]: I0318 13:29:10.302838 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 13:29:10.545705 master-0 kubenswrapper[27835]: I0318 13:29:10.545631 27835 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-console"/"console-oauth-config" Mar 18 13:29:10.549395 master-0 kubenswrapper[27835]: I0318 13:29:10.548755 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq" Mar 18 13:29:10.549986 master-0 kubenswrapper[27835]: I0318 13:29:10.549947 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq" Mar 18 13:29:10.667176 master-0 kubenswrapper[27835]: I0318 13:29:10.667113 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:29:11.531002 master-0 kubenswrapper[27835]: I0318 13:29:11.530924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 13:29:11.612793 master-0 kubenswrapper[27835]: I0318 13:29:11.612727 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:29:11.699103 master-0 kubenswrapper[27835]: I0318 13:29:11.699036 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:29:11.799518 master-0 kubenswrapper[27835]: I0318 13:29:11.799378 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-lvs7l" Mar 18 13:29:11.804880 master-0 kubenswrapper[27835]: I0318 13:29:11.804833 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 13:29:11.838990 master-0 kubenswrapper[27835]: I0318 13:29:11.838926 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 13:29:11.893765 master-0 kubenswrapper[27835]: I0318 13:29:11.893703 27835 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:29:11.981911 master-0 kubenswrapper[27835]: I0318 13:29:11.981867 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"] Mar 18 13:29:12.014495 master-0 kubenswrapper[27835]: I0318 13:29:12.012692 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:29:12.306347 master-0 kubenswrapper[27835]: I0318 13:29:12.305628 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:29:12.337224 master-0 kubenswrapper[27835]: I0318 13:29:12.337101 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:29:12.379974 master-0 kubenswrapper[27835]: I0318 13:29:12.379930 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 13:29:12.566439 master-0 kubenswrapper[27835]: I0318 13:29:12.566367 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d794fddf9-gh6gq" event={"ID":"414429b2-4ccb-49cd-8bae-f9a6ab653831","Type":"ContainerStarted","Data":"b6e635d52f946549fcf27defbcf724ae9289a976eded079682e4355e8de44795"} Mar 18 13:29:12.566439 master-0 kubenswrapper[27835]: I0318 13:29:12.566424 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d794fddf9-gh6gq" event={"ID":"414429b2-4ccb-49cd-8bae-f9a6ab653831","Type":"ContainerStarted","Data":"8ad937341a8553834ca1513f1e79113ee92be5c3147b5f5323178fd7ea20e047"} Mar 18 13:29:12.599715 master-0 kubenswrapper[27835]: I0318 13:29:12.599548 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d794fddf9-gh6gq" podStartSLOduration=6.599527746 podStartE2EDuration="6.599527746s" 
podCreationTimestamp="2026-03-18 13:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:12.593958822 +0000 UTC m=+316.559170392" watchObservedRunningTime="2026-03-18 13:29:12.599527746 +0000 UTC m=+316.564739326" Mar 18 13:29:12.728013 master-0 kubenswrapper[27835]: I0318 13:29:12.727954 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:29:13.061196 master-0 kubenswrapper[27835]: I0318 13:29:13.061130 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:29:13.491260 master-0 kubenswrapper[27835]: I0318 13:29:13.491203 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:29:13.507615 master-0 kubenswrapper[27835]: I0318 13:29:13.507552 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:29:13.751098 master-0 kubenswrapper[27835]: I0318 13:29:13.750985 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:29:13.841866 master-0 kubenswrapper[27835]: I0318 13:29:13.841811 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:29:13.897882 master-0 kubenswrapper[27835]: I0318 13:29:13.897792 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:29:14.085056 master-0 kubenswrapper[27835]: I0318 13:29:14.084901 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 13:29:14.235685 master-0 kubenswrapper[27835]: 
I0318 13:29:14.235593 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 13:29:14.576166 master-0 kubenswrapper[27835]: I0318 13:29:14.576099 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:29:14.606665 master-0 kubenswrapper[27835]: I0318 13:29:14.606581 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:29:14.837541 master-0 kubenswrapper[27835]: I0318 13:29:14.837348 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:29:14.889320 master-0 kubenswrapper[27835]: I0318 13:29:14.889275 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sszww" Mar 18 13:29:15.057850 master-0 kubenswrapper[27835]: I0318 13:29:15.057783 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-5dnvq" Mar 18 13:29:15.413071 master-0 kubenswrapper[27835]: I0318 13:29:15.413004 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:29:15.557618 master-0 kubenswrapper[27835]: I0318 13:29:15.557534 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5fm1li8uoic3j" Mar 18 13:29:15.566098 master-0 kubenswrapper[27835]: I0318 13:29:15.565994 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 13:29:15.600437 master-0 kubenswrapper[27835]: I0318 13:29:15.600331 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:29:15.793525 master-0 
kubenswrapper[27835]: I0318 13:29:15.792823 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r54p6" Mar 18 13:29:16.522801 master-0 kubenswrapper[27835]: I0318 13:29:16.522679 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:29:16.694702 master-0 kubenswrapper[27835]: I0318 13:29:16.694654 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 13:29:16.708850 master-0 kubenswrapper[27835]: I0318 13:29:16.708812 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-6lm6r" Mar 18 13:29:16.900084 master-0 kubenswrapper[27835]: I0318 13:29:16.899856 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d794fddf9-gh6gq" Mar 18 13:29:16.900084 master-0 kubenswrapper[27835]: I0318 13:29:16.899946 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d794fddf9-gh6gq" Mar 18 13:29:16.901716 master-0 kubenswrapper[27835]: I0318 13:29:16.901666 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:29:16.901796 master-0 kubenswrapper[27835]: I0318 13:29:16.901725 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:29:17.022235 master-0 kubenswrapper[27835]: I0318 13:29:17.022186 27835 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 13:29:17.491237 master-0 kubenswrapper[27835]: I0318 13:29:17.491186 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:29:18.154619 master-0 kubenswrapper[27835]: I0318 13:29:18.153634 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:29:18.396450 master-0 kubenswrapper[27835]: I0318 13:29:18.396307 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:29:18.565758 master-0 kubenswrapper[27835]: I0318 13:29:18.565703 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:29:18.821536 master-0 kubenswrapper[27835]: I0318 13:29:18.821468 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 13:29:19.332897 master-0 kubenswrapper[27835]: I0318 13:29:19.332823 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:29:19.420334 master-0 kubenswrapper[27835]: I0318 13:29:19.420272 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:29:19.456710 master-0 kubenswrapper[27835]: I0318 13:29:19.446607 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:29:19.474903 master-0 kubenswrapper[27835]: I0318 13:29:19.474834 27835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:29:19.856675 master-0 kubenswrapper[27835]: I0318 13:29:19.856620 27835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:29:19.882058 master-0 kubenswrapper[27835]: I0318 13:29:19.881982 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 13:29:19.998652 master-0 kubenswrapper[27835]: I0318 13:29:19.998588 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:29:19.998903 master-0 kubenswrapper[27835]: I0318 13:29:19.998654 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:29:20.516782 master-0 kubenswrapper[27835]: I0318 13:29:20.516715 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:29:20.733182 master-0 kubenswrapper[27835]: I0318 13:29:20.733126 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 13:29:20.957649 master-0 kubenswrapper[27835]: I0318 13:29:20.957565 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:29:21.051116 master-0 kubenswrapper[27835]: I0318 13:29:21.051045 27835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:29:21.409477 master-0 kubenswrapper[27835]: I0318 13:29:21.409404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:29:21.478028 master-0 kubenswrapper[27835]: I0318 13:29:21.477963 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 13:29:21.487216 master-0 kubenswrapper[27835]: I0318 13:29:21.487189 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 13:29:21.599700 master-0 kubenswrapper[27835]: I0318 13:29:21.599591 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:29:22.359648 master-0 kubenswrapper[27835]: I0318 13:29:22.358650 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:29:22.489302 master-0 kubenswrapper[27835]: I0318 13:29:22.489251 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:29:22.509465 master-0 kubenswrapper[27835]: I0318 13:29:22.509368 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 13:29:22.551598 master-0 kubenswrapper[27835]: I0318 13:29:22.551538 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:29:22.564531 master-0 kubenswrapper[27835]: I0318 13:29:22.563651 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 13:29:23.364527 master-0 kubenswrapper[27835]: I0318 13:29:23.364474 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:29:23.478520 master-0 kubenswrapper[27835]: I0318 13:29:23.478482 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:29:23.714560 master-0 kubenswrapper[27835]: I0318 13:29:23.714122 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:29:23.886648 master-0 kubenswrapper[27835]: I0318 13:29:23.886579 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:29:24.151005 master-0 kubenswrapper[27835]: I0318 13:29:24.150892 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 13:29:24.592064 master-0 kubenswrapper[27835]: I0318 13:29:24.591991 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:29:24.611142 master-0 kubenswrapper[27835]: I0318 13:29:24.611079 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:29:25.363306 master-0 kubenswrapper[27835]: I0318 13:29:25.363254 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:29:25.574839 master-0 kubenswrapper[27835]: I0318 13:29:25.574743 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 13:29:25.689305 master-0 kubenswrapper[27835]: I0318 13:29:25.689228 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:29:25.771616 master-0 kubenswrapper[27835]: I0318 13:29:25.771555 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:29:25.908723 master-0 kubenswrapper[27835]: I0318 13:29:25.908672 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:29:26.012262 master-0 kubenswrapper[27835]: I0318 13:29:26.012164 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:29:26.574917 master-0 kubenswrapper[27835]: I0318 13:29:26.574838 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hnp25" Mar 18 13:29:26.901318 master-0 kubenswrapper[27835]: I0318 13:29:26.901152 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:29:26.901318 master-0 kubenswrapper[27835]: I0318 13:29:26.901222 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:29:27.653584 master-0 kubenswrapper[27835]: I0318 13:29:27.653503 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:29:28.883264 master-0 kubenswrapper[27835]: I0318 13:29:28.883203 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 13:29:29.900962 master-0 kubenswrapper[27835]: I0318 13:29:29.900920 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:29:29.999094 master-0 kubenswrapper[27835]: I0318 13:29:29.999052 27835 patch_prober.go:28] interesting pod/console-7bb86d5d56-ffwhx container/console namespace/openshift-console: Startup probe 
status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 13:29:29.999373 master-0 kubenswrapper[27835]: I0318 13:29:29.999341 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 13:29:31.296582 master-0 kubenswrapper[27835]: I0318 13:29:31.296521 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 13:29:31.561976 master-0 kubenswrapper[27835]: I0318 13:29:31.561831 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-684cf44489-lfkt8" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" containerID="cri-o://9fa672993c9b7c5196a3fb1556df3cdd54dab0fccfb2f0fc7df466544667f818" gracePeriod=15 Mar 18 13:29:31.714001 master-0 kubenswrapper[27835]: I0318 13:29:31.713937 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-684cf44489-lfkt8_81db56ef-4aac-48a8-aada-fb4c198f0b5c/console/0.log" Mar 18 13:29:31.714001 master-0 kubenswrapper[27835]: I0318 13:29:31.713995 27835 generic.go:334] "Generic (PLEG): container finished" podID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerID="9fa672993c9b7c5196a3fb1556df3cdd54dab0fccfb2f0fc7df466544667f818" exitCode=2 Mar 18 13:29:31.714289 master-0 kubenswrapper[27835]: I0318 13:29:31.714029 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684cf44489-lfkt8" event={"ID":"81db56ef-4aac-48a8-aada-fb4c198f0b5c","Type":"ContainerDied","Data":"9fa672993c9b7c5196a3fb1556df3cdd54dab0fccfb2f0fc7df466544667f818"} Mar 18 13:29:31.752626 master-0 kubenswrapper[27835]: 
I0318 13:29:31.752559 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-dprq6" Mar 18 13:29:32.051680 master-0 kubenswrapper[27835]: I0318 13:29:32.051620 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-684cf44489-lfkt8_81db56ef-4aac-48a8-aada-fb4c198f0b5c/console/0.log" Mar 18 13:29:32.051928 master-0 kubenswrapper[27835]: I0318 13:29:32.051711 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-684cf44489-lfkt8" Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125741 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125789 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125885 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125928 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4lhq\" (UniqueName: \"kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq\") pod 
\"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125941 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.125970 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.127169 master-0 kubenswrapper[27835]: I0318 13:29:32.126073 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert\") pod \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\" (UID: \"81db56ef-4aac-48a8-aada-fb4c198f0b5c\") " Mar 18 13:29:32.128575 master-0 kubenswrapper[27835]: I0318 13:29:32.128533 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config" (OuterVolumeSpecName: "console-config") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:32.128575 master-0 kubenswrapper[27835]: I0318 13:29:32.128561 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:32.128797 master-0 kubenswrapper[27835]: I0318 13:29:32.128770 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca" (OuterVolumeSpecName: "service-ca") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:32.129242 master-0 kubenswrapper[27835]: I0318 13:29:32.129226 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:32.130141 master-0 kubenswrapper[27835]: I0318 13:29:32.130080 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:29:32.130255 master-0 kubenswrapper[27835]: I0318 13:29:32.130202 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:29:32.132068 master-0 kubenswrapper[27835]: I0318 13:29:32.132043 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq" (OuterVolumeSpecName: "kube-api-access-q4lhq") pod "81db56ef-4aac-48a8-aada-fb4c198f0b5c" (UID: "81db56ef-4aac-48a8-aada-fb4c198f0b5c"). InnerVolumeSpecName "kube-api-access-q4lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227653 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4lhq\" (UniqueName: \"kubernetes.io/projected/81db56ef-4aac-48a8-aada-fb4c198f0b5c-kube-api-access-q4lhq\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227710 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227726 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227761 27835 reconciler_common.go:293] "Volume detached 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227774 27835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227787 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.228176 master-0 kubenswrapper[27835]: I0318 13:29:32.227799 27835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/81db56ef-4aac-48a8-aada-fb4c198f0b5c-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:32.722298 master-0 kubenswrapper[27835]: I0318 13:29:32.722255 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-684cf44489-lfkt8_81db56ef-4aac-48a8-aada-fb4c198f0b5c/console/0.log" Mar 18 13:29:32.722795 master-0 kubenswrapper[27835]: I0318 13:29:32.722304 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-684cf44489-lfkt8" event={"ID":"81db56ef-4aac-48a8-aada-fb4c198f0b5c","Type":"ContainerDied","Data":"8bb10be2b6365a5a5679cfef0e278c2462d4eca1c11ab331102458b318fa5320"} Mar 18 13:29:32.722795 master-0 kubenswrapper[27835]: I0318 13:29:32.722338 27835 scope.go:117] "RemoveContainer" containerID="9fa672993c9b7c5196a3fb1556df3cdd54dab0fccfb2f0fc7df466544667f818" Mar 18 13:29:32.722795 master-0 kubenswrapper[27835]: I0318 13:29:32.722441 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-684cf44489-lfkt8" Mar 18 13:29:32.755628 master-0 kubenswrapper[27835]: I0318 13:29:32.755577 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-684cf44489-lfkt8"] Mar 18 13:29:32.767675 master-0 kubenswrapper[27835]: I0318 13:29:32.767629 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-684cf44489-lfkt8"] Mar 18 13:29:33.417271 master-0 kubenswrapper[27835]: I0318 13:29:33.417202 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 13:29:34.289296 master-0 kubenswrapper[27835]: I0318 13:29:34.289237 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" path="/var/lib/kubelet/pods/81db56ef-4aac-48a8-aada-fb4c198f0b5c/volumes" Mar 18 13:29:36.900285 master-0 kubenswrapper[27835]: I0318 13:29:36.900129 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:29:36.900285 master-0 kubenswrapper[27835]: I0318 13:29:36.900267 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:29:39.728123 master-0 kubenswrapper[27835]: I0318 13:29:39.728050 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"] Mar 18 13:29:39.778450 master-0 kubenswrapper[27835]: I0318 13:29:39.776893 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 
13:29:39.778450 master-0 kubenswrapper[27835]: E0318 13:29:39.777516 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" Mar 18 13:29:39.778450 master-0 kubenswrapper[27835]: I0318 13:29:39.777534 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" Mar 18 13:29:39.778450 master-0 kubenswrapper[27835]: I0318 13:29:39.777857 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="81db56ef-4aac-48a8-aada-fb4c198f0b5c" containerName="console" Mar 18 13:29:39.778863 master-0 kubenswrapper[27835]: I0318 13:29:39.778600 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.810217 master-0 kubenswrapper[27835]: I0318 13:29:39.810132 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 13:29:39.949244 master-0 kubenswrapper[27835]: I0318 13:29:39.949172 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949244 master-0 kubenswrapper[27835]: I0318 13:29:39.949243 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949562 master-0 kubenswrapper[27835]: I0318 13:29:39.949298 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949562 master-0 kubenswrapper[27835]: I0318 13:29:39.949364 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949562 master-0 kubenswrapper[27835]: I0318 13:29:39.949384 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949562 master-0 kubenswrapper[27835]: I0318 13:29:39.949408 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbgp\" (UniqueName: \"kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:39.949562 master-0 kubenswrapper[27835]: I0318 13:29:39.949534 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " 
pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051238 master-0 kubenswrapper[27835]: I0318 13:29:40.051093 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051238 master-0 kubenswrapper[27835]: I0318 13:29:40.051148 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051600 master-0 kubenswrapper[27835]: I0318 13:29:40.051515 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051862 master-0 kubenswrapper[27835]: I0318 13:29:40.051790 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051957 master-0 kubenswrapper[27835]: I0318 13:29:40.051880 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle\") pod \"console-9cc97458b-bkd6r\" (UID: 
\"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051957 master-0 kubenswrapper[27835]: I0318 13:29:40.051902 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.051957 master-0 kubenswrapper[27835]: I0318 13:29:40.051928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nbgp\" (UniqueName: \"kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.052380 master-0 kubenswrapper[27835]: I0318 13:29:40.052341 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.052777 master-0 kubenswrapper[27835]: I0318 13:29:40.052708 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.053293 master-0 kubenswrapper[27835]: I0318 13:29:40.053222 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config\") pod \"console-9cc97458b-bkd6r\" (UID: 
\"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.053753 master-0 kubenswrapper[27835]: I0318 13:29:40.053696 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.054635 master-0 kubenswrapper[27835]: I0318 13:29:40.054597 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.054821 master-0 kubenswrapper[27835]: I0318 13:29:40.054781 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.073188 master-0 kubenswrapper[27835]: I0318 13:29:40.073078 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nbgp\" (UniqueName: \"kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp\") pod \"console-9cc97458b-bkd6r\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.106740 master-0 kubenswrapper[27835]: I0318 13:29:40.106652 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:40.525761 master-0 kubenswrapper[27835]: I0318 13:29:40.525722 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 13:29:40.796798 master-0 kubenswrapper[27835]: I0318 13:29:40.796654 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cc97458b-bkd6r" event={"ID":"6600dd48-0759-43de-b1df-e99334590bac","Type":"ContainerStarted","Data":"f0a78f5a48a297ffd715ddb4d57dccd0a5527ce4781f61229142c90e8164889e"} Mar 18 13:29:40.796798 master-0 kubenswrapper[27835]: I0318 13:29:40.796706 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cc97458b-bkd6r" event={"ID":"6600dd48-0759-43de-b1df-e99334590bac","Type":"ContainerStarted","Data":"d9f83ea5d9ad7cd809601507209e5981026898c5afd124c3c2542a7b973c1699"} Mar 18 13:29:40.817357 master-0 kubenswrapper[27835]: I0318 13:29:40.817239 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9cc97458b-bkd6r" podStartSLOduration=1.817179994 podStartE2EDuration="1.817179994s" podCreationTimestamp="2026-03-18 13:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:40.81487415 +0000 UTC m=+344.780085721" watchObservedRunningTime="2026-03-18 13:29:40.817179994 +0000 UTC m=+344.782391564" Mar 18 13:29:46.900560 master-0 kubenswrapper[27835]: I0318 13:29:46.900498 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:29:46.901137 master-0 kubenswrapper[27835]: I0318 13:29:46.900564 27835 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:29:50.066445 master-0 kubenswrapper[27835]: I0318 13:29:50.063522 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:50.066445 master-0 kubenswrapper[27835]: I0318 13:29:50.065146 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.067442 master-0 kubenswrapper[27835]: I0318 13:29:50.067373 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-w7jpc" Mar 18 13:29:50.067823 master-0 kubenswrapper[27835]: I0318 13:29:50.067789 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 13:29:50.073125 master-0 kubenswrapper[27835]: I0318 13:29:50.073073 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:50.108280 master-0 kubenswrapper[27835]: I0318 13:29:50.108220 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:50.108647 master-0 kubenswrapper[27835]: I0318 13:29:50.108626 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:29:50.109761 master-0 kubenswrapper[27835]: I0318 13:29:50.109695 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 
18 13:29:50.109879 master-0 kubenswrapper[27835]: I0318 13:29:50.109786 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:29:50.223103 master-0 kubenswrapper[27835]: I0318 13:29:50.223025 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.223352 master-0 kubenswrapper[27835]: I0318 13:29:50.223114 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.223352 master-0 kubenswrapper[27835]: I0318 13:29:50.223232 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.325308 master-0 kubenswrapper[27835]: I0318 13:29:50.325108 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " 
pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.325308 master-0 kubenswrapper[27835]: I0318 13:29:50.325162 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.325308 master-0 kubenswrapper[27835]: I0318 13:29:50.325208 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.325848 master-0 kubenswrapper[27835]: I0318 13:29:50.325586 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.325848 master-0 kubenswrapper[27835]: I0318 13:29:50.325625 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.343539 master-0 kubenswrapper[27835]: I0318 13:29:50.343445 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access\") pod \"installer-4-master-0\" (UID: 
\"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.460245 master-0 kubenswrapper[27835]: I0318 13:29:50.460168 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:29:50.876854 master-0 kubenswrapper[27835]: I0318 13:29:50.876730 27835 generic.go:334] "Generic (PLEG): container finished" podID="41cc6278-8f99-407c-ba5f-750a40e3058c" containerID="de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1" exitCode=0 Mar 18 13:29:50.876854 master-0 kubenswrapper[27835]: I0318 13:29:50.876777 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerDied","Data":"de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1"} Mar 18 13:29:50.920465 master-0 kubenswrapper[27835]: I0318 13:29:50.916283 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 13:29:50.924142 master-0 kubenswrapper[27835]: W0318 13:29:50.924087 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3fc82bf1_1fc4_4bf4_b7c5_6624cc0cc387.slice/crio-55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d WatchSource:0}: Error finding container 55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d: Status 404 returned error can't find the container with id 55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d Mar 18 13:29:50.956140 master-0 kubenswrapper[27835]: I0318 13:29:50.956090 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:29:51.035643 master-0 kubenswrapper[27835]: I0318 13:29:51.035573 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.035911 master-0 kubenswrapper[27835]: I0318 13:29:51.035658 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.035911 master-0 kubenswrapper[27835]: I0318 13:29:51.035764 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.035911 master-0 kubenswrapper[27835]: I0318 13:29:51.035793 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.035911 master-0 kubenswrapper[27835]: I0318 13:29:51.035887 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: 
\"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.035911 master-0 kubenswrapper[27835]: I0318 13:29:51.035912 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.036357 master-0 kubenswrapper[27835]: I0318 13:29:51.035930 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") pod \"41cc6278-8f99-407c-ba5f-750a40e3058c\" (UID: \"41cc6278-8f99-407c-ba5f-750a40e3058c\") " Mar 18 13:29:51.037308 master-0 kubenswrapper[27835]: I0318 13:29:51.037020 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:51.037308 master-0 kubenswrapper[27835]: I0318 13:29:51.037196 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:29:51.037598 master-0 kubenswrapper[27835]: I0318 13:29:51.037301 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log" (OuterVolumeSpecName: "audit-log") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:29:51.039762 master-0 kubenswrapper[27835]: I0318 13:29:51.039725 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:29:51.039762 master-0 kubenswrapper[27835]: I0318 13:29:51.039748 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b" (OuterVolumeSpecName: "kube-api-access-g2w6b") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "kube-api-access-g2w6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:29:51.039981 master-0 kubenswrapper[27835]: I0318 13:29:51.039752 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "client-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:29:51.041246 master-0 kubenswrapper[27835]: I0318 13:29:51.041212 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "41cc6278-8f99-407c-ba5f-750a40e3058c" (UID: "41cc6278-8f99-407c-ba5f-750a40e3058c"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.137925 27835 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.137988 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2w6b\" (UniqueName: \"kubernetes.io/projected/41cc6278-8f99-407c-ba5f-750a40e3058c-kube-api-access-g2w6b\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.138009 27835 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.138032 27835 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/41cc6278-8f99-407c-ba5f-750a40e3058c-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.138052 27835 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/41cc6278-8f99-407c-ba5f-750a40e3058c-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.138076 master-0 kubenswrapper[27835]: I0318 13:29:51.138070 27835 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.139277 master-0 kubenswrapper[27835]: I0318 13:29:51.138091 27835 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/41cc6278-8f99-407c-ba5f-750a40e3058c-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:29:51.887804 master-0 kubenswrapper[27835]: I0318 13:29:51.887746 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387","Type":"ContainerStarted","Data":"d2e514db69e75cbc768f96eebeb8588cc8f05fc140ed675e0f0f241327588485"} Mar 18 13:29:51.887804 master-0 kubenswrapper[27835]: I0318 13:29:51.887805 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387","Type":"ContainerStarted","Data":"55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d"} Mar 18 13:29:51.891094 master-0 kubenswrapper[27835]: I0318 13:29:51.891012 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" event={"ID":"41cc6278-8f99-407c-ba5f-750a40e3058c","Type":"ContainerDied","Data":"03b02d62589926abe5e0b1261c9a635f1d2bfbcefca79eac740978fb36ffa6b1"} Mar 18 13:29:51.891196 master-0 kubenswrapper[27835]: I0318 13:29:51.891103 27835 scope.go:117] "RemoveContainer" containerID="de4324b4c32cf4e9cbdf79af1c88339cded8c6fd18295426d2e5f309799e44c1" Mar 18 13:29:51.891370 master-0 
kubenswrapper[27835]: I0318 13:29:51.891304 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65dbcd767c-7bqc9" Mar 18 13:29:51.913713 master-0 kubenswrapper[27835]: I0318 13:29:51.913665 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.913650333 podStartE2EDuration="1.913650333s" podCreationTimestamp="2026-03-18 13:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:29:51.912255524 +0000 UTC m=+355.877467104" watchObservedRunningTime="2026-03-18 13:29:51.913650333 +0000 UTC m=+355.878861893" Mar 18 13:29:51.957469 master-0 kubenswrapper[27835]: I0318 13:29:51.948236 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"] Mar 18 13:29:51.957469 master-0 kubenswrapper[27835]: I0318 13:29:51.954861 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-65dbcd767c-7bqc9"] Mar 18 13:29:52.293943 master-0 kubenswrapper[27835]: I0318 13:29:52.293901 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cc6278-8f99-407c-ba5f-750a40e3058c" path="/var/lib/kubelet/pods/41cc6278-8f99-407c-ba5f-750a40e3058c/volumes" Mar 18 13:29:56.900494 master-0 kubenswrapper[27835]: I0318 13:29:56.900303 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:29:56.900494 master-0 kubenswrapper[27835]: I0318 13:29:56.900369 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:00.107671 master-0 kubenswrapper[27835]: I0318 13:30:00.107594 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:00.108379 master-0 kubenswrapper[27835]: I0318 13:30:00.107681 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:30:04.763932 master-0 kubenswrapper[27835]: I0318 13:30:04.763837 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bb86d5d56-ffwhx" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" containerID="cri-o://ce1c406456499af0ac269fe2feecc4f83fe8248112e4b0f9d2db6dd1022203ce" gracePeriod=15 Mar 18 13:30:04.997130 master-0 kubenswrapper[27835]: I0318 13:30:04.997068 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb86d5d56-ffwhx_1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4/console/0.log" Mar 18 13:30:04.997130 master-0 kubenswrapper[27835]: I0318 13:30:04.997128 27835 generic.go:334] "Generic (PLEG): container finished" podID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerID="ce1c406456499af0ac269fe2feecc4f83fe8248112e4b0f9d2db6dd1022203ce" exitCode=2 Mar 18 13:30:04.997504 master-0 kubenswrapper[27835]: I0318 13:30:04.997161 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb86d5d56-ffwhx" 
event={"ID":"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4","Type":"ContainerDied","Data":"ce1c406456499af0ac269fe2feecc4f83fe8248112e4b0f9d2db6dd1022203ce"} Mar 18 13:30:05.253099 master-0 kubenswrapper[27835]: I0318 13:30:05.253049 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb86d5d56-ffwhx_1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4/console/0.log" Mar 18 13:30:05.253323 master-0 kubenswrapper[27835]: I0318 13:30:05.253115 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bb86d5d56-ffwhx" Mar 18 13:30:05.374043 master-0 kubenswrapper[27835]: I0318 13:30:05.373988 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374043 master-0 kubenswrapper[27835]: I0318 13:30:05.374029 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374328 master-0 kubenswrapper[27835]: I0318 13:30:05.374161 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374328 master-0 kubenswrapper[27835]: I0318 13:30:05.374186 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle\") pod 
\"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374328 master-0 kubenswrapper[27835]: I0318 13:30:05.374200 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374328 master-0 kubenswrapper[27835]: I0318 13:30:05.374215 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.374328 master-0 kubenswrapper[27835]: I0318 13:30:05.374233 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbh4t\" (UniqueName: \"kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t\") pod \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\" (UID: \"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4\") " Mar 18 13:30:05.376150 master-0 kubenswrapper[27835]: I0318 13:30:05.375626 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config" (OuterVolumeSpecName: "console-config") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:05.376150 master-0 kubenswrapper[27835]: I0318 13:30:05.375689 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:05.376150 master-0 kubenswrapper[27835]: I0318 13:30:05.376068 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:05.376354 master-0 kubenswrapper[27835]: I0318 13:30:05.376314 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca" (OuterVolumeSpecName: "service-ca") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:30:05.379941 master-0 kubenswrapper[27835]: I0318 13:30:05.379593 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:05.379941 master-0 kubenswrapper[27835]: I0318 13:30:05.379690 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t" (OuterVolumeSpecName: "kube-api-access-fbh4t") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "kube-api-access-fbh4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:30:05.379941 master-0 kubenswrapper[27835]: I0318 13:30:05.379719 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" (UID: "1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475852 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475899 27835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475919 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475931 27835 reconciler_common.go:293] "Volume detached for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475944 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbh4t\" (UniqueName: \"kubernetes.io/projected/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-kube-api-access-fbh4t\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475957 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:05.476009 master-0 kubenswrapper[27835]: I0318 13:30:05.475970 27835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:06.004636 master-0 kubenswrapper[27835]: I0318 13:30:06.004588 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bb86d5d56-ffwhx_1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4/console/0.log" Mar 18 13:30:06.004636 master-0 kubenswrapper[27835]: I0318 13:30:06.004638 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bb86d5d56-ffwhx" event={"ID":"1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4","Type":"ContainerDied","Data":"28ff5b928d3aa871633aaa4dba6bb889908836cac6bb909062b1b831326332eb"} Mar 18 13:30:06.005242 master-0 kubenswrapper[27835]: I0318 13:30:06.004672 27835 scope.go:117] "RemoveContainer" containerID="ce1c406456499af0ac269fe2feecc4f83fe8248112e4b0f9d2db6dd1022203ce" Mar 18 13:30:06.005242 master-0 kubenswrapper[27835]: I0318 13:30:06.004767 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bb86d5d56-ffwhx" Mar 18 13:30:06.038450 master-0 kubenswrapper[27835]: I0318 13:30:06.038369 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"] Mar 18 13:30:06.046056 master-0 kubenswrapper[27835]: I0318 13:30:06.045997 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bb86d5d56-ffwhx"] Mar 18 13:30:06.292573 master-0 kubenswrapper[27835]: I0318 13:30:06.292379 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" path="/var/lib/kubelet/pods/1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4/volumes" Mar 18 13:30:06.900172 master-0 kubenswrapper[27835]: I0318 13:30:06.900116 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:06.900396 master-0 kubenswrapper[27835]: I0318 13:30:06.900180 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:10.107835 master-0 kubenswrapper[27835]: I0318 13:30:10.107759 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:10.108611 master-0 kubenswrapper[27835]: I0318 13:30:10.107900 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:30:16.900671 master-0 kubenswrapper[27835]: I0318 13:30:16.900606 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:16.900671 master-0 kubenswrapper[27835]: I0318 13:30:16.900670 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:20.108117 master-0 kubenswrapper[27835]: I0318 13:30:20.108057 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:20.108669 master-0 kubenswrapper[27835]: I0318 13:30:20.108126 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:30:24.052985 master-0 kubenswrapper[27835]: I0318 13:30:24.052887 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:24.054008 master-0 kubenswrapper[27835]: I0318 13:30:24.053189 27835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="cluster-policy-controller" containerID="cri-o://80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" gracePeriod=30 Mar 18 13:30:24.054008 master-0 kubenswrapper[27835]: I0318 13:30:24.053347 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" gracePeriod=30 Mar 18 13:30:24.054008 master-0 kubenswrapper[27835]: I0318 13:30:24.053347 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" containerID="cri-o://b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" gracePeriod=30 Mar 18 13:30:24.054008 master-0 kubenswrapper[27835]: I0318 13:30:24.053340 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" gracePeriod=30 Mar 18 13:30:24.054336 master-0 kubenswrapper[27835]: I0318 13:30:24.054251 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054594 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-cert-syncer" Mar 18 13:30:24.055276 master-0 
kubenswrapper[27835]: I0318 13:30:24.054619 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-cert-syncer" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054641 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="cluster-policy-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054650 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="cluster-policy-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054666 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054674 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054694 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-recovery-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054702 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-recovery-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054734 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054744 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: 
E0318 13:30:24.054756 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cc6278-8f99-407c-ba5f-750a40e3058c" containerName="metrics-server" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054773 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cc6278-8f99-407c-ba5f-750a40e3058c" containerName="metrics-server" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054788 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054796 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: E0318 13:30:24.054807 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054816 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.054982 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-cert-syncer" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055016 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055028 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager-recovery-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 
13:30:24.055051 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055069 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="cluster-policy-controller" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055082 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb6b1c3-9c99-4a1c-ac37-f1ad9dfa73a4" containerName="console" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055099 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cc6278-8f99-407c-ba5f-750a40e3058c" containerName="metrics-server" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055111 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.055276 master-0 kubenswrapper[27835]: I0318 13:30:24.055130 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.058554 master-0 kubenswrapper[27835]: E0318 13:30:24.055384 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.058554 master-0 kubenswrapper[27835]: I0318 13:30:24.055400 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129e07da670ff3af256d72652e4b1da" containerName="kube-controller-manager" Mar 18 13:30:24.202323 master-0 kubenswrapper[27835]: I0318 13:30:24.202244 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.202732 master-0 kubenswrapper[27835]: I0318 13:30:24.202669 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.304927 master-0 kubenswrapper[27835]: I0318 13:30:24.304755 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.304927 master-0 kubenswrapper[27835]: I0318 13:30:24.304869 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.304927 master-0 kubenswrapper[27835]: I0318 13:30:24.304890 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.305326 master-0 kubenswrapper[27835]: I0318 13:30:24.305136 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9587703208e136f5328582e1ba0fc966-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"9587703208e136f5328582e1ba0fc966\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.352186 master-0 kubenswrapper[27835]: I0318 13:30:24.352117 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/2.log" Mar 18 13:30:24.353334 master-0 kubenswrapper[27835]: I0318 13:30:24.353277 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager-cert-syncer/0.log" Mar 18 13:30:24.353830 master-0 kubenswrapper[27835]: I0318 13:30:24.353788 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:24.357207 master-0 kubenswrapper[27835]: I0318 13:30:24.357160 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c129e07da670ff3af256d72652e4b1da" podUID="9587703208e136f5328582e1ba0fc966" Mar 18 13:30:24.507249 master-0 kubenswrapper[27835]: I0318 13:30:24.507184 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") pod \"c129e07da670ff3af256d72652e4b1da\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " Mar 18 13:30:24.507483 master-0 kubenswrapper[27835]: I0318 13:30:24.507293 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") pod 
\"c129e07da670ff3af256d72652e4b1da\" (UID: \"c129e07da670ff3af256d72652e4b1da\") " Mar 18 13:30:24.507739 master-0 kubenswrapper[27835]: I0318 13:30:24.507646 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c129e07da670ff3af256d72652e4b1da" (UID: "c129e07da670ff3af256d72652e4b1da"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:30:24.507823 master-0 kubenswrapper[27835]: I0318 13:30:24.507680 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c129e07da670ff3af256d72652e4b1da" (UID: "c129e07da670ff3af256d72652e4b1da"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:30:24.536214 master-0 kubenswrapper[27835]: I0318 13:30:24.535964 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Mar 18 13:30:24.537037 master-0 kubenswrapper[27835]: I0318 13:30:24.537019 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.541307 master-0 kubenswrapper[27835]: I0318 13:30:24.541251 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sd5ht" Mar 18 13:30:24.541703 master-0 kubenswrapper[27835]: I0318 13:30:24.541676 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 13:30:24.553174 master-0 kubenswrapper[27835]: I0318 13:30:24.553112 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Mar 18 13:30:24.609406 master-0 kubenswrapper[27835]: I0318 13:30:24.609249 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.609406 master-0 kubenswrapper[27835]: I0318 13:30:24.609402 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.609699 master-0 kubenswrapper[27835]: I0318 13:30:24.609499 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.609699 master-0 kubenswrapper[27835]: I0318 13:30:24.609576 27835 
reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:24.609699 master-0 kubenswrapper[27835]: I0318 13:30:24.609590 27835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c129e07da670ff3af256d72652e4b1da-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:24.710819 master-0 kubenswrapper[27835]: I0318 13:30:24.710729 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.711103 master-0 kubenswrapper[27835]: I0318 13:30:24.710831 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.711103 master-0 kubenswrapper[27835]: I0318 13:30:24.711062 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.711207 master-0 kubenswrapper[27835]: I0318 13:30:24.711137 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " 
pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.711207 master-0 kubenswrapper[27835]: I0318 13:30:24.711174 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.726538 master-0 kubenswrapper[27835]: I0318 13:30:24.726470 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access\") pod \"installer-7-master-0\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") " pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:24.882535 master-0 kubenswrapper[27835]: I0318 13:30:24.882323 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0" Mar 18 13:30:25.166492 master-0 kubenswrapper[27835]: I0318 13:30:25.164065 27835 generic.go:334] "Generic (PLEG): container finished" podID="3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" containerID="d2e514db69e75cbc768f96eebeb8588cc8f05fc140ed675e0f0f241327588485" exitCode=0 Mar 18 13:30:25.166492 master-0 kubenswrapper[27835]: I0318 13:30:25.164134 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387","Type":"ContainerDied","Data":"d2e514db69e75cbc768f96eebeb8588cc8f05fc140ed675e0f0f241327588485"} Mar 18 13:30:25.169469 master-0 kubenswrapper[27835]: I0318 13:30:25.167395 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager/2.log" Mar 18 13:30:25.169469 master-0 kubenswrapper[27835]: 
I0318 13:30:25.169216 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c129e07da670ff3af256d72652e4b1da/kube-controller-manager-cert-syncer/0.log" Mar 18 13:30:25.169862 master-0 kubenswrapper[27835]: I0318 13:30:25.169803 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" exitCode=0 Mar 18 13:30:25.169862 master-0 kubenswrapper[27835]: I0318 13:30:25.169851 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" exitCode=0 Mar 18 13:30:25.169862 master-0 kubenswrapper[27835]: I0318 13:30:25.169859 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" exitCode=2 Mar 18 13:30:25.169862 master-0 kubenswrapper[27835]: I0318 13:30:25.169867 27835 generic.go:334] "Generic (PLEG): container finished" podID="c129e07da670ff3af256d72652e4b1da" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" exitCode=0 Mar 18 13:30:25.171741 master-0 kubenswrapper[27835]: I0318 13:30:25.169925 27835 scope.go:117] "RemoveContainer" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.171741 master-0 kubenswrapper[27835]: I0318 13:30:25.170044 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:25.192882 master-0 kubenswrapper[27835]: I0318 13:30:25.192809 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c129e07da670ff3af256d72652e4b1da" podUID="9587703208e136f5328582e1ba0fc966" Mar 18 13:30:25.197265 master-0 kubenswrapper[27835]: I0318 13:30:25.196772 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.203630 master-0 kubenswrapper[27835]: I0318 13:30:25.203348 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c129e07da670ff3af256d72652e4b1da" podUID="9587703208e136f5328582e1ba0fc966" Mar 18 13:30:25.222597 master-0 kubenswrapper[27835]: I0318 13:30:25.222522 27835 scope.go:117] "RemoveContainer" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.239859 master-0 kubenswrapper[27835]: I0318 13:30:25.239792 27835 scope.go:117] "RemoveContainer" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.257792 master-0 kubenswrapper[27835]: I0318 13:30:25.257678 27835 scope.go:117] "RemoveContainer" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.275707 master-0 kubenswrapper[27835]: I0318 13:30:25.275654 27835 scope.go:117] "RemoveContainer" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.278173 master-0 kubenswrapper[27835]: E0318 13:30:25.278126 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": container 
with ID starting with b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529 not found: ID does not exist" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.278247 master-0 kubenswrapper[27835]: I0318 13:30:25.278180 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529"} err="failed to get container status \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": rpc error: code = NotFound desc = could not find container \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": container with ID starting with b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529 not found: ID does not exist" Mar 18 13:30:25.278247 master-0 kubenswrapper[27835]: I0318 13:30:25.278212 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.278802 master-0 kubenswrapper[27835]: E0318 13:30:25.278751 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": container with ID starting with b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56 not found: ID does not exist" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.278869 master-0 kubenswrapper[27835]: I0318 13:30:25.278831 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"} err="failed to get container status \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": rpc error: code = NotFound desc = could not find container \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": container with ID starting with 
b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56 not found: ID does not exist" Mar 18 13:30:25.278929 master-0 kubenswrapper[27835]: I0318 13:30:25.278890 27835 scope.go:117] "RemoveContainer" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.279385 master-0 kubenswrapper[27835]: E0318 13:30:25.279259 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": container with ID starting with d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337 not found: ID does not exist" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.279488 master-0 kubenswrapper[27835]: I0318 13:30:25.279400 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"} err="failed to get container status \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": rpc error: code = NotFound desc = could not find container \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": container with ID starting with d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337 not found: ID does not exist" Mar 18 13:30:25.279488 master-0 kubenswrapper[27835]: I0318 13:30:25.279448 27835 scope.go:117] "RemoveContainer" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.279837 master-0 kubenswrapper[27835]: E0318 13:30:25.279800 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": container with ID starting with 907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8 not found: ID does not exist" 
containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.279837 master-0 kubenswrapper[27835]: I0318 13:30:25.279825 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"} err="failed to get container status \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": rpc error: code = NotFound desc = could not find container \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": container with ID starting with 907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8 not found: ID does not exist" Mar 18 13:30:25.280178 master-0 kubenswrapper[27835]: I0318 13:30:25.279847 27835 scope.go:117] "RemoveContainer" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.280271 master-0 kubenswrapper[27835]: E0318 13:30:25.280164 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": container with ID starting with 80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89 not found: ID does not exist" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.280271 master-0 kubenswrapper[27835]: I0318 13:30:25.280205 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"} err="failed to get container status \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": rpc error: code = NotFound desc = could not find container \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": container with ID starting with 80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89 not found: ID does not exist" Mar 18 13:30:25.280271 master-0 
kubenswrapper[27835]: I0318 13:30:25.280228 27835 scope.go:117] "RemoveContainer" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.280550 master-0 kubenswrapper[27835]: I0318 13:30:25.280494 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529"} err="failed to get container status \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": rpc error: code = NotFound desc = could not find container \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": container with ID starting with b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529 not found: ID does not exist" Mar 18 13:30:25.280550 master-0 kubenswrapper[27835]: I0318 13:30:25.280525 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.280825 master-0 kubenswrapper[27835]: I0318 13:30:25.280747 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"} err="failed to get container status \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": rpc error: code = NotFound desc = could not find container \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": container with ID starting with b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56 not found: ID does not exist" Mar 18 13:30:25.280825 master-0 kubenswrapper[27835]: I0318 13:30:25.280763 27835 scope.go:117] "RemoveContainer" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.281300 master-0 kubenswrapper[27835]: I0318 13:30:25.281259 27835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"} err="failed to get container status \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": rpc error: code = NotFound desc = could not find container \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": container with ID starting with d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337 not found: ID does not exist" Mar 18 13:30:25.281300 master-0 kubenswrapper[27835]: I0318 13:30:25.281289 27835 scope.go:117] "RemoveContainer" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.282280 master-0 kubenswrapper[27835]: I0318 13:30:25.282183 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"} err="failed to get container status \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": rpc error: code = NotFound desc = could not find container \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": container with ID starting with 907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8 not found: ID does not exist" Mar 18 13:30:25.282351 master-0 kubenswrapper[27835]: I0318 13:30:25.282279 27835 scope.go:117] "RemoveContainer" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.283398 master-0 kubenswrapper[27835]: I0318 13:30:25.283364 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"} err="failed to get container status \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": rpc error: code = NotFound desc = could not find container \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": container with ID starting with 
80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89 not found: ID does not exist" Mar 18 13:30:25.283505 master-0 kubenswrapper[27835]: I0318 13:30:25.283430 27835 scope.go:117] "RemoveContainer" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.283796 master-0 kubenswrapper[27835]: I0318 13:30:25.283753 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529"} err="failed to get container status \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": rpc error: code = NotFound desc = could not find container \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": container with ID starting with b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529 not found: ID does not exist" Mar 18 13:30:25.283796 master-0 kubenswrapper[27835]: I0318 13:30:25.283781 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.284176 master-0 kubenswrapper[27835]: I0318 13:30:25.284140 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"} err="failed to get container status \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": rpc error: code = NotFound desc = could not find container \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": container with ID starting with b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56 not found: ID does not exist" Mar 18 13:30:25.284176 master-0 kubenswrapper[27835]: I0318 13:30:25.284172 27835 scope.go:117] "RemoveContainer" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.284806 master-0 kubenswrapper[27835]: I0318 13:30:25.284760 27835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"} err="failed to get container status \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": rpc error: code = NotFound desc = could not find container \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": container with ID starting with d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337 not found: ID does not exist" Mar 18 13:30:25.284881 master-0 kubenswrapper[27835]: I0318 13:30:25.284806 27835 scope.go:117] "RemoveContainer" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.285221 master-0 kubenswrapper[27835]: I0318 13:30:25.285198 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"} err="failed to get container status \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": rpc error: code = NotFound desc = could not find container \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": container with ID starting with 907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8 not found: ID does not exist" Mar 18 13:30:25.285278 master-0 kubenswrapper[27835]: I0318 13:30:25.285219 27835 scope.go:117] "RemoveContainer" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.285604 master-0 kubenswrapper[27835]: I0318 13:30:25.285582 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"} err="failed to get container status \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": rpc error: code = NotFound desc = could not find container \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": container with ID starting 
with 80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89 not found: ID does not exist" Mar 18 13:30:25.285604 master-0 kubenswrapper[27835]: I0318 13:30:25.285601 27835 scope.go:117] "RemoveContainer" containerID="b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529" Mar 18 13:30:25.285840 master-0 kubenswrapper[27835]: I0318 13:30:25.285818 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529"} err="failed to get container status \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": rpc error: code = NotFound desc = could not find container \"b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529\": container with ID starting with b03e7d112d9f6e02fc0d128e89ab6de223ad85703c20718c2b9470a2cebd7529 not found: ID does not exist" Mar 18 13:30:25.285840 master-0 kubenswrapper[27835]: I0318 13:30:25.285833 27835 scope.go:117] "RemoveContainer" containerID="b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56" Mar 18 13:30:25.286082 master-0 kubenswrapper[27835]: I0318 13:30:25.286044 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56"} err="failed to get container status \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": rpc error: code = NotFound desc = could not find container \"b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56\": container with ID starting with b4b5c8d1e406d13fe58909e853bd3a2f1dbf56f74da4311baf7f48390d808a56 not found: ID does not exist" Mar 18 13:30:25.286082 master-0 kubenswrapper[27835]: I0318 13:30:25.286061 27835 scope.go:117] "RemoveContainer" containerID="d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337" Mar 18 13:30:25.286491 master-0 kubenswrapper[27835]: I0318 13:30:25.286465 27835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337"} err="failed to get container status \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": rpc error: code = NotFound desc = could not find container \"d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337\": container with ID starting with d48e15ab911c60989cfb99c327ae7e0567590b1356659404b161af05907e4337 not found: ID does not exist" Mar 18 13:30:25.286491 master-0 kubenswrapper[27835]: I0318 13:30:25.286485 27835 scope.go:117] "RemoveContainer" containerID="907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8" Mar 18 13:30:25.286767 master-0 kubenswrapper[27835]: I0318 13:30:25.286710 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8"} err="failed to get container status \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": rpc error: code = NotFound desc = could not find container \"907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8\": container with ID starting with 907f62a3ed0a66ba580ec70c52ca37e615478a0dc4f9200560aa2d1b0a9edbd8 not found: ID does not exist" Mar 18 13:30:25.286767 master-0 kubenswrapper[27835]: I0318 13:30:25.286730 27835 scope.go:117] "RemoveContainer" containerID="80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89" Mar 18 13:30:25.287614 master-0 kubenswrapper[27835]: I0318 13:30:25.287564 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89"} err="failed to get container status \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": rpc error: code = NotFound desc = could not find container \"80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89\": 
container with ID starting with 80ea1bc510c56f8c04524b33659a9124c49ea6274b90ad277511efa074d7ad89 not found: ID does not exist" Mar 18 13:30:25.308708 master-0 kubenswrapper[27835]: I0318 13:30:25.308661 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-7-master-0"] Mar 18 13:30:26.178235 master-0 kubenswrapper[27835]: I0318 13:30:26.178179 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ce1e1007-d503-49db-abc1-8daa04b3d881","Type":"ContainerStarted","Data":"c8731f762594602d2fd3769e7cbd0d1dcf5f97ae42652f6f76fde99ff58dc7f4"} Mar 18 13:30:26.178235 master-0 kubenswrapper[27835]: I0318 13:30:26.178236 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ce1e1007-d503-49db-abc1-8daa04b3d881","Type":"ContainerStarted","Data":"602008d6bdf46b564fe99b9585b24b42ac00f833aee7bc2f6dbf8f5834e3ba71"} Mar 18 13:30:26.199517 master-0 kubenswrapper[27835]: I0318 13:30:26.199434 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-7-master-0" podStartSLOduration=2.196566783 podStartE2EDuration="2.196566783s" podCreationTimestamp="2026-03-18 13:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:30:26.195606006 +0000 UTC m=+390.160817566" watchObservedRunningTime="2026-03-18 13:30:26.196566783 +0000 UTC m=+390.161778343" Mar 18 13:30:26.292403 master-0 kubenswrapper[27835]: I0318 13:30:26.292342 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c129e07da670ff3af256d72652e4b1da" path="/var/lib/kubelet/pods/c129e07da670ff3af256d72652e4b1da/volumes" Mar 18 13:30:26.530364 master-0 kubenswrapper[27835]: I0318 13:30:26.530307 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651165 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir\") pod \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651239 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access\") pod \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651322 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock\") pod \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\" (UID: \"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387\") " Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651450 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock" (OuterVolumeSpecName: "var-lock") pod "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" (UID: "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651685 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:26.656444 master-0 kubenswrapper[27835]: I0318 13:30:26.651683 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" (UID: "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:30:26.658763 master-0 kubenswrapper[27835]: I0318 13:30:26.657894 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" (UID: "3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:30:26.753596 master-0 kubenswrapper[27835]: I0318 13:30:26.753398 27835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:26.753596 master-0 kubenswrapper[27835]: I0318 13:30:26.753486 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 13:30:26.900306 master-0 kubenswrapper[27835]: I0318 13:30:26.900228 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:26.900610 master-0 kubenswrapper[27835]: I0318 13:30:26.900305 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:27.188545 master-0 kubenswrapper[27835]: I0318 13:30:27.188456 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387","Type":"ContainerDied","Data":"55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d"} Mar 18 13:30:27.188545 master-0 kubenswrapper[27835]: I0318 13:30:27.188519 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55059deba7c052a5243c6be78ed5efa2f99a8b11497ffa2bc09cba330bd51b3d" Mar 18 13:30:27.188545 master-0 kubenswrapper[27835]: I0318 13:30:27.188468 
27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 13:30:30.107771 master-0 kubenswrapper[27835]: I0318 13:30:30.107701 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:30.107771 master-0 kubenswrapper[27835]: I0318 13:30:30.107754 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:30:36.901034 master-0 kubenswrapper[27835]: I0318 13:30:36.900934 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:36.901034 master-0 kubenswrapper[27835]: I0318 13:30:36.901025 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:37.280663 master-0 kubenswrapper[27835]: I0318 13:30:37.280547 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:37.302768 master-0 kubenswrapper[27835]: I0318 13:30:37.302713 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="672cc8f7-9acf-42af-ba5d-ce101e0fd68d" Mar 18 13:30:37.302768 master-0 kubenswrapper[27835]: I0318 13:30:37.302754 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="672cc8f7-9acf-42af-ba5d-ce101e0fd68d" Mar 18 13:30:37.315594 master-0 kubenswrapper[27835]: I0318 13:30:37.315290 27835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:37.336548 master-0 kubenswrapper[27835]: I0318 13:30:37.334491 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:37.341503 master-0 kubenswrapper[27835]: I0318 13:30:37.340645 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:37.351869 master-0 kubenswrapper[27835]: I0318 13:30:37.351760 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:37.356856 master-0 kubenswrapper[27835]: I0318 13:30:37.356802 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 13:30:38.278948 master-0 kubenswrapper[27835]: I0318 13:30:38.278895 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"e6968ae00540e2dba693c77e670deed9e7bc54fe368fe141703c50ccbabffb13"} Mar 
18 13:30:38.278948 master-0 kubenswrapper[27835]: I0318 13:30:38.278943 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"94fc6ccd494774969e85cb82526eca9c1c48ae661d0cb01fbc20f90c8d765122"} Mar 18 13:30:38.278948 master-0 kubenswrapper[27835]: I0318 13:30:38.278952 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"7b30a002d6aa83cf20f0a552aace635f5910f80408cca31198fd9a8f8469738c"} Mar 18 13:30:38.279573 master-0 kubenswrapper[27835]: I0318 13:30:38.278961 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"dfbc90abb344b2a7a1cc6f13176e45bd6d455cb909c6f2195b5b4f4f3d8afbb5"} Mar 18 13:30:39.290470 master-0 kubenswrapper[27835]: I0318 13:30:39.290348 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"ccb092c587c45e342da0666937b41eed56ce284c71a0f51a5fd154bbee671969"} Mar 18 13:30:39.315065 master-0 kubenswrapper[27835]: I0318 13:30:39.314926 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.314905796 podStartE2EDuration="2.314905796s" podCreationTimestamp="2026-03-18 13:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:30:39.308438948 +0000 UTC m=+403.273650528" watchObservedRunningTime="2026-03-18 13:30:39.314905796 +0000 UTC 
m=+403.280117386" Mar 18 13:30:40.107560 master-0 kubenswrapper[27835]: I0318 13:30:40.107503 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:40.107792 master-0 kubenswrapper[27835]: I0318 13:30:40.107565 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:30:46.901078 master-0 kubenswrapper[27835]: I0318 13:30:46.901003 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:46.901898 master-0 kubenswrapper[27835]: I0318 13:30:46.901080 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:30:47.335993 master-0 kubenswrapper[27835]: I0318 13:30:47.335916 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.336227 master-0 kubenswrapper[27835]: I0318 13:30:47.336116 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.336465 master-0 kubenswrapper[27835]: I0318 13:30:47.336389 27835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.336582 master-0 kubenswrapper[27835]: I0318 13:30:47.336467 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.341750 master-0 kubenswrapper[27835]: I0318 13:30:47.341704 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.342668 master-0 kubenswrapper[27835]: I0318 13:30:47.342602 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.379120 master-0 kubenswrapper[27835]: I0318 13:30:47.377637 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:47.379120 master-0 kubenswrapper[27835]: I0318 13:30:47.378478 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:30:50.107701 master-0 kubenswrapper[27835]: I0318 13:30:50.107619 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:30:50.108372 master-0 kubenswrapper[27835]: I0318 13:30:50.107709 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 
13:30:56.900295 master-0 kubenswrapper[27835]: I0318 13:30:56.900160 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:30:56.900295 master-0 kubenswrapper[27835]: I0318 13:30:56.900227 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:31:00.107925 master-0 kubenswrapper[27835]: I0318 13:31:00.107846 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:31:00.107925 master-0 kubenswrapper[27835]: I0318 13:31:00.107915 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:31:03.356138 master-0 kubenswrapper[27835]: I0318 13:31:03.355988 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:31:03.356925 master-0 kubenswrapper[27835]: I0318 13:31:03.356683 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver" containerID="cri-o://b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480" 
gracePeriod=15 Mar 18 13:31:03.356925 master-0 kubenswrapper[27835]: I0318 13:31:03.356723 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2" gracePeriod=15 Mar 18 13:31:03.356925 master-0 kubenswrapper[27835]: I0318 13:31:03.356857 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b" gracePeriod=15 Mar 18 13:31:03.357073 master-0 kubenswrapper[27835]: I0318 13:31:03.356935 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e" gracePeriod=15 Mar 18 13:31:03.357073 master-0 kubenswrapper[27835]: I0318 13:31:03.356969 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer" containerID="cri-o://fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec" gracePeriod=15 Mar 18 13:31:03.360515 master-0 kubenswrapper[27835]: I0318 13:31:03.360393 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:31:03.361953 master-0 kubenswrapper[27835]: E0318 13:31:03.361912 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" 
containerName="kube-apiserver" Mar 18 13:31:03.361953 master-0 kubenswrapper[27835]: I0318 13:31:03.361943 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver" Mar 18 13:31:03.361953 master-0 kubenswrapper[27835]: E0318 13:31:03.361956 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.361962 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: E0318 13:31:03.361977 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.361983 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: E0318 13:31:03.362093 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" containerName="installer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362102 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" containerName="installer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: E0318 13:31:03.362111 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362118 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz" Mar 18 
13:31:03.362448 master-0 kubenswrapper[27835]: E0318 13:31:03.362133 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="setup" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362139 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="setup" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: E0318 13:31:03.362147 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362153 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362327 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362355 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362365 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362382 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc82bf1-1fc4-4bf4-b7c5-6624cc0cc387" containerName="installer" Mar 18 13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362412 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 
13:31:03.362448 master-0 kubenswrapper[27835]: I0318 13:31:03.362436 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz" Mar 18 13:31:03.364350 master-0 kubenswrapper[27835]: I0318 13:31:03.364320 27835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:31:03.365770 master-0 kubenswrapper[27835]: I0318 13:31:03.365742 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.410574 master-0 kubenswrapper[27835]: I0318 13:31:03.409678 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="3cae843f2a8e3c3c3212b1177305c1d5" Mar 18 13:31:03.417545 master-0 kubenswrapper[27835]: I0318 13:31:03.417097 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.418393 master-0 kubenswrapper[27835]: I0318 13:31:03.418374 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.418877 master-0 kubenswrapper[27835]: I0318 13:31:03.418858 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.424345 master-0 kubenswrapper[27835]: I0318 13:31:03.424314 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.425247 master-0 kubenswrapper[27835]: I0318 13:31:03.425139 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.425381 master-0 kubenswrapper[27835]: I0318 13:31:03.425364 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.425825 master-0 kubenswrapper[27835]: I0318 13:31:03.425804 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.426048 master-0 
kubenswrapper[27835]: I0318 13:31:03.426031 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.524661 master-0 kubenswrapper[27835]: E0318 13:31:03.524601 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.525792 master-0 kubenswrapper[27835]: I0318 13:31:03.525762 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 13:31:03.526473 master-0 kubenswrapper[27835]: I0318 13:31:03.526435 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2" exitCode=0 Mar 18 13:31:03.526473 master-0 kubenswrapper[27835]: I0318 13:31:03.526470 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b" exitCode=0 Mar 18 13:31:03.526618 master-0 kubenswrapper[27835]: I0318 13:31:03.526479 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec" exitCode=2 Mar 18 13:31:03.528097 master-0 kubenswrapper[27835]: I0318 13:31:03.528054 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528191 master-0 kubenswrapper[27835]: I0318 13:31:03.528144 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528349 master-0 kubenswrapper[27835]: I0318 13:31:03.528294 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528434 master-0 kubenswrapper[27835]: I0318 13:31:03.528371 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528503 master-0 kubenswrapper[27835]: I0318 13:31:03.528441 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528503 master-0 kubenswrapper[27835]: I0318 
13:31:03.528464 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528503 master-0 kubenswrapper[27835]: I0318 13:31:03.528491 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528626 master-0 kubenswrapper[27835]: I0318 13:31:03.528544 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528626 master-0 kubenswrapper[27835]: I0318 13:31:03.528594 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.528626 master-0 kubenswrapper[27835]: I0318 13:31:03.528613 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" 
Mar 18 13:31:03.528740 master-0 kubenswrapper[27835]: I0318 13:31:03.528657 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.528740 master-0 kubenswrapper[27835]: I0318 13:31:03.528632 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.528821 master-0 kubenswrapper[27835]: I0318 13:31:03.528729 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.528877 master-0 kubenswrapper[27835]: I0318 13:31:03.528844 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.528976 master-0 kubenswrapper[27835]: I0318 13:31:03.528946 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"7a4744531cb137d7252790be662d8cc8\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
Mar 18 13:31:03.529052 master-0 kubenswrapper[27835]: I0318 13:31:03.528998 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3cae843f2a8e3c3c3212b1177305c1d5-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"3cae843f2a8e3c3c3212b1177305c1d5\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:03.826018 master-0 kubenswrapper[27835]: I0318 13:31:03.825968 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:03.862079 master-0 kubenswrapper[27835]: W0318 13:31:03.861975 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4744531cb137d7252790be662d8cc8.slice/crio-6700d8577a4b28d2e3f28833c68577a73cb749abd6df9e5a12e1b52bbc91197f WatchSource:0}: Error finding container 6700d8577a4b28d2e3f28833c68577a73cb749abd6df9e5a12e1b52bbc91197f: Status 404 returned error can't find the container with id 6700d8577a4b28d2e3f28833c68577a73cb749abd6df9e5a12e1b52bbc91197f Mar 18 13:31:03.866223 master-0 kubenswrapper[27835]: E0318 13:31:03.866114 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df2af81cda4a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:7a4744531cb137d7252790be662d8cc8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:31:03.864779945 +0000 UTC m=+427.829991495,LastTimestamp:2026-03-18 13:31:03.864779945 +0000 UTC m=+427.829991495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 13:31:04.538942 master-0 kubenswrapper[27835]: I0318 13:31:04.538893 27835 generic.go:334] "Generic (PLEG): container finished" podID="ce1e1007-d503-49db-abc1-8daa04b3d881" containerID="c8731f762594602d2fd3769e7cbd0d1dcf5f97ae42652f6f76fde99ff58dc7f4" exitCode=0 Mar 18 13:31:04.539645 master-0 kubenswrapper[27835]: I0318 13:31:04.539019 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ce1e1007-d503-49db-abc1-8daa04b3d881","Type":"ContainerDied","Data":"c8731f762594602d2fd3769e7cbd0d1dcf5f97ae42652f6f76fde99ff58dc7f4"} Mar 18 13:31:04.541059 master-0 kubenswrapper[27835]: I0318 13:31:04.540962 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:31:04.542575 master-0 kubenswrapper[27835]: I0318 13:31:04.542481 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 13:31:04.543340 master-0 kubenswrapper[27835]: I0318 13:31:04.543279 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" 
containerID="4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e" exitCode=0 Mar 18 13:31:04.544738 master-0 kubenswrapper[27835]: I0318 13:31:04.544686 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"7a4744531cb137d7252790be662d8cc8","Type":"ContainerStarted","Data":"2474e8afe2c961844d8e61e408e5196602dffdc620ec016ddcbc79c6247500a1"} Mar 18 13:31:04.544738 master-0 kubenswrapper[27835]: I0318 13:31:04.544720 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"7a4744531cb137d7252790be662d8cc8","Type":"ContainerStarted","Data":"6700d8577a4b28d2e3f28833c68577a73cb749abd6df9e5a12e1b52bbc91197f"} Mar 18 13:31:04.545735 master-0 kubenswrapper[27835]: I0318 13:31:04.545664 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 13:31:04.545903 master-0 kubenswrapper[27835]: E0318 13:31:04.545822 27835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 13:31:05.851753 master-0 kubenswrapper[27835]: I0318 13:31:05.851681 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 13:31:05.852594 master-0 kubenswrapper[27835]: I0318 13:31:05.852267 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:05.853091 master-0 kubenswrapper[27835]: I0318 13:31:05.853040 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:05.853476 master-0 kubenswrapper[27835]: I0318 13:31:05.853433 27835 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:05.904914 master-0 kubenswrapper[27835]: I0318 13:31:05.904865 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") "
Mar 18 13:31:05.904914 master-0 kubenswrapper[27835]: I0318 13:31:05.904915 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") "
Mar 18 13:31:05.905072 master-0 kubenswrapper[27835]: I0318 13:31:05.904936 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") "
Mar 18 13:31:05.905218 master-0 kubenswrapper[27835]: I0318 13:31:05.905185 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:05.905218 master-0 kubenswrapper[27835]: I0318 13:31:05.905217 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:05.905467 master-0 kubenswrapper[27835]: I0318 13:31:05.905231 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:05.944660 master-0 kubenswrapper[27835]: I0318 13:31:05.944603 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0"
Mar 18 13:31:05.946174 master-0 kubenswrapper[27835]: I0318 13:31:05.945548 27835 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:05.946577 master-0 kubenswrapper[27835]: I0318 13:31:05.946553 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.006740 master-0 kubenswrapper[27835]: I0318 13:31:06.006665 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir\") pod \"ce1e1007-d503-49db-abc1-8daa04b3d881\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") "
Mar 18 13:31:06.007053 master-0 kubenswrapper[27835]: I0318 13:31:06.006799 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ce1e1007-d503-49db-abc1-8daa04b3d881" (UID: "ce1e1007-d503-49db-abc1-8daa04b3d881"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:06.007053 master-0 kubenswrapper[27835]: I0318 13:31:06.006842 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock" (OuterVolumeSpecName: "var-lock") pod "ce1e1007-d503-49db-abc1-8daa04b3d881" (UID: "ce1e1007-d503-49db-abc1-8daa04b3d881"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:06.007053 master-0 kubenswrapper[27835]: I0318 13:31:06.006815 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock\") pod \"ce1e1007-d503-49db-abc1-8daa04b3d881\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") "
Mar 18 13:31:06.007053 master-0 kubenswrapper[27835]: I0318 13:31:06.006927 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access\") pod \"ce1e1007-d503-49db-abc1-8daa04b3d881\" (UID: \"ce1e1007-d503-49db-abc1-8daa04b3d881\") "
Mar 18 13:31:06.007819 master-0 kubenswrapper[27835]: I0318 13:31:06.007745 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.007885 master-0 kubenswrapper[27835]: I0318 13:31:06.007822 27835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.007885 master-0 kubenswrapper[27835]: I0318 13:31:06.007842 27835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.007953 master-0 kubenswrapper[27835]: I0318 13:31:06.007898 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.007953 master-0 kubenswrapper[27835]: I0318 13:31:06.007919 27835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce1e1007-d503-49db-abc1-8daa04b3d881-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.009679 master-0 kubenswrapper[27835]: I0318 13:31:06.009639 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce1e1007-d503-49db-abc1-8daa04b3d881" (UID: "ce1e1007-d503-49db-abc1-8daa04b3d881"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:31:06.109141 master-0 kubenswrapper[27835]: I0318 13:31:06.109033 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce1e1007-d503-49db-abc1-8daa04b3d881-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:06.291704 master-0 kubenswrapper[27835]: I0318 13:31:06.291579 27835 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.292733 master-0 kubenswrapper[27835]: I0318 13:31:06.292624 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.299608 master-0 kubenswrapper[27835]: I0318 13:31:06.299533 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f502b117c7c8479f7f20848a50fec0" path="/var/lib/kubelet/pods/d5f502b117c7c8479f7f20848a50fec0/volumes"
Mar 18 13:31:06.623305 master-0 kubenswrapper[27835]: I0318 13:31:06.622838 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-7-master-0" event={"ID":"ce1e1007-d503-49db-abc1-8daa04b3d881","Type":"ContainerDied","Data":"602008d6bdf46b564fe99b9585b24b42ac00f833aee7bc2f6dbf8f5834e3ba71"}
Mar 18 13:31:06.623305 master-0 kubenswrapper[27835]: I0318 13:31:06.623194 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602008d6bdf46b564fe99b9585b24b42ac00f833aee7bc2f6dbf8f5834e3ba71"
Mar 18 13:31:06.623305 master-0 kubenswrapper[27835]: I0318 13:31:06.622921 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-7-master-0"
Mar 18 13:31:06.628540 master-0 kubenswrapper[27835]: I0318 13:31:06.628410 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log"
Mar 18 13:31:06.629972 master-0 kubenswrapper[27835]: I0318 13:31:06.629881 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.631880 master-0 kubenswrapper[27835]: I0318 13:31:06.631839 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480" exitCode=0
Mar 18 13:31:06.631986 master-0 kubenswrapper[27835]: I0318 13:31:06.631905 27835 scope.go:117] "RemoveContainer" containerID="a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2"
Mar 18 13:31:06.632857 master-0 kubenswrapper[27835]: I0318 13:31:06.632799 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:06.634179 master-0 kubenswrapper[27835]: I0318 13:31:06.634130 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.634827 master-0 kubenswrapper[27835]: I0318 13:31:06.634793 27835 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.637063 master-0 kubenswrapper[27835]: I0318 13:31:06.636987 27835 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.637806 master-0 kubenswrapper[27835]: I0318 13:31:06.637742 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:06.649967 master-0 kubenswrapper[27835]: I0318 13:31:06.649929 27835 scope.go:117] "RemoveContainer" containerID="7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b"
Mar 18 13:31:06.665628 master-0 kubenswrapper[27835]: I0318 13:31:06.665583 27835 scope.go:117] "RemoveContainer" containerID="4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e"
Mar 18 13:31:06.681197 master-0 kubenswrapper[27835]: I0318 13:31:06.681161 27835 scope.go:117] "RemoveContainer" containerID="fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec"
Mar 18 13:31:06.695702 master-0 kubenswrapper[27835]: I0318 13:31:06.695663 27835 scope.go:117] "RemoveContainer" containerID="b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480"
Mar 18 13:31:06.728556 master-0 kubenswrapper[27835]: I0318 13:31:06.728318 27835 scope.go:117] "RemoveContainer" containerID="e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c"
Mar 18 13:31:06.752405 master-0 kubenswrapper[27835]: I0318 13:31:06.752364 27835 scope.go:117] "RemoveContainer" containerID="a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2"
Mar 18 13:31:06.752845 master-0 kubenswrapper[27835]: E0318 13:31:06.752707 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2\": container with ID starting with a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2 not found: ID does not exist" containerID="a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2"
Mar 18 13:31:06.752845 master-0 kubenswrapper[27835]: I0318 13:31:06.752745 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2"} err="failed to get container status \"a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2\": rpc error: code = NotFound desc = could not find container \"a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2\": container with ID starting with a56ff8b1b770bc5368cd1740d4fb0666d8bf754133e8b2ea51bef7267a3302c2 not found: ID does not exist"
Mar 18 13:31:06.752845 master-0 kubenswrapper[27835]: I0318 13:31:06.752775 27835 scope.go:117] "RemoveContainer" containerID="7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b"
Mar 18 13:31:06.753156 master-0 kubenswrapper[27835]: E0318 13:31:06.753049 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b\": container with ID starting with 7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b not found: ID does not exist" containerID="7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b"
Mar 18 13:31:06.753156 master-0 kubenswrapper[27835]: I0318 13:31:06.753077 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b"} err="failed to get container status \"7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b\": rpc error: code = NotFound desc = could not find container \"7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b\": container with ID starting with 7c8a553e1c6d6fefe684c617a690a439e9aa8c866ba1a72a9c4e5d94885b4e6b not found: ID does not exist"
Mar 18 13:31:06.753156 master-0 kubenswrapper[27835]: I0318 13:31:06.753098 27835 scope.go:117] "RemoveContainer" containerID="4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e"
Mar 18 13:31:06.753321 master-0 kubenswrapper[27835]: E0318 13:31:06.753278 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e\": container with ID starting with 4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e not found: ID does not exist" containerID="4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e"
Mar 18 13:31:06.753321 master-0 kubenswrapper[27835]: I0318 13:31:06.753304 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e"} err="failed to get container status \"4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e\": rpc error: code = NotFound desc = could not find container \"4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e\": container with ID starting with 4e6f860f5a238733bb1600608f8d19d5ba39e1c0b4dd9265395e2f7c50a3071e not found: ID does not exist"
Mar 18 13:31:06.753453 master-0 kubenswrapper[27835]: I0318 13:31:06.753323 27835 scope.go:117] "RemoveContainer" containerID="fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec"
Mar 18 13:31:06.753957 master-0 kubenswrapper[27835]: E0318 13:31:06.753562 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec\": container with ID starting with fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec not found: ID does not exist" containerID="fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec"
Mar 18 13:31:06.753957 master-0 kubenswrapper[27835]: I0318 13:31:06.753590 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec"} err="failed to get container status \"fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec\": rpc error: code = NotFound desc = could not find container \"fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec\": container with ID starting with fcdb1b819161f6786aa89aeb7873b8bf4187f3c9a3ed2e0b6440edb9bb8a5aec not found: ID does not exist"
Mar 18 13:31:06.753957 master-0 kubenswrapper[27835]: I0318 13:31:06.753609 27835 scope.go:117] "RemoveContainer" containerID="b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480"
Mar 18 13:31:06.754529 master-0 kubenswrapper[27835]: E0318 13:31:06.754085 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480\": container with ID starting with b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480 not found: ID does not exist" containerID="b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480"
Mar 18 13:31:06.754529 master-0 kubenswrapper[27835]: I0318 13:31:06.754114 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480"} err="failed to get container status \"b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480\": rpc error: code = NotFound desc = could not find container \"b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480\": container with ID starting with b77d8578909e57c509d7afe00177650fc036c0ca83d0cb91505afe847365a480 not found: ID does not exist"
Mar 18 13:31:06.754529 master-0 kubenswrapper[27835]: I0318 13:31:06.754132 27835 scope.go:117] "RemoveContainer" containerID="e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c"
Mar 18 13:31:06.754790 master-0 kubenswrapper[27835]: E0318 13:31:06.754649 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c\": container with ID starting with e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c not found: ID does not exist" containerID="e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c"
Mar 18 13:31:06.754790 master-0 kubenswrapper[27835]: I0318 13:31:06.754737 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c"} err="failed to get container status \"e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c\": rpc error: code = NotFound desc = could not find container \"e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c\": container with ID starting with e87ea1da7a93df186fd364e11fc68cb519f35f4f8a41649d22e65e68cf8a316c not found: ID does not exist"
Mar 18 13:31:06.899865 master-0 kubenswrapper[27835]: I0318 13:31:06.899696 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Mar 18 13:31:06.899865 master-0 kubenswrapper[27835]: I0318 13:31:06.899757 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Mar 18 13:31:10.108850 master-0 kubenswrapper[27835]: I0318 13:31:10.108779 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 18 13:31:10.109999 master-0 kubenswrapper[27835]: I0318 13:31:10.108868 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 18 13:31:11.169357 master-0 kubenswrapper[27835]: E0318 13:31:11.169164 27835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189df2af81cda4a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:7a4744531cb137d7252790be662d8cc8,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 13:31:03.864779945 +0000 UTC m=+427.829991495,LastTimestamp:2026-03-18 13:31:03.864779945 +0000 UTC m=+427.829991495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 13:31:13.594279 master-0 kubenswrapper[27835]: E0318 13:31:13.594201 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:13.595891 master-0 kubenswrapper[27835]: E0318 13:31:13.595845 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:13.596795 master-0 kubenswrapper[27835]: E0318 13:31:13.596764 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:13.597597 master-0 kubenswrapper[27835]: E0318 13:31:13.597570 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:13.598328 master-0 kubenswrapper[27835]: E0318 13:31:13.598276 27835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:13.598462 master-0 kubenswrapper[27835]: I0318 13:31:13.598336 27835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 18 13:31:13.599072 master-0 kubenswrapper[27835]: E0318 13:31:13.599003 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 13:31:13.799954 master-0 kubenswrapper[27835]: E0318 13:31:13.799886 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 13:31:14.201213 master-0 kubenswrapper[27835]: E0318 13:31:14.201137 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 13:31:15.002298 master-0 kubenswrapper[27835]: E0318 13:31:15.002219 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 13:31:16.286463 master-0 kubenswrapper[27835]: I0318 13:31:16.286357 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:16.288893 master-0 kubenswrapper[27835]: I0318 13:31:16.288805 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:16.290255 master-0 kubenswrapper[27835]: I0318 13:31:16.290143 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:16.310834 master-0 kubenswrapper[27835]: I0318 13:31:16.310770 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:16.310834 master-0 kubenswrapper[27835]: I0318 13:31:16.310829 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:16.311967 master-0 kubenswrapper[27835]: E0318 13:31:16.311912 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:16.312869 master-0 kubenswrapper[27835]: I0318 13:31:16.312798 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:16.335373 master-0 kubenswrapper[27835]: W0318 13:31:16.335304 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cae843f2a8e3c3c3212b1177305c1d5.slice/crio-10bf0689770caa4f746390c0b5a5cb64f70e37c97e377ee0c25dee5a02609900 WatchSource:0}: Error finding container 10bf0689770caa4f746390c0b5a5cb64f70e37c97e377ee0c25dee5a02609900: Status 404 returned error can't find the container with id 10bf0689770caa4f746390c0b5a5cb64f70e37c97e377ee0c25dee5a02609900
Mar 18 13:31:16.603972 master-0 kubenswrapper[27835]: E0318 13:31:16.603887 27835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 13:31:16.718350 master-0 kubenswrapper[27835]: I0318 13:31:16.718288 27835 generic.go:334] "Generic (PLEG): container finished" podID="3cae843f2a8e3c3c3212b1177305c1d5" containerID="2253419b43cc4c7b1224034708086052f9bc7a4f87882cbb214d67e7f5da3088" exitCode=0
Mar 18 13:31:16.718587 master-0 kubenswrapper[27835]: I0318 13:31:16.718352 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerDied","Data":"2253419b43cc4c7b1224034708086052f9bc7a4f87882cbb214d67e7f5da3088"}
Mar 18 13:31:16.718587 master-0 kubenswrapper[27835]: I0318 13:31:16.718537 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"10bf0689770caa4f746390c0b5a5cb64f70e37c97e377ee0c25dee5a02609900"}
Mar 18 13:31:16.719156 master-0 kubenswrapper[27835]: I0318 13:31:16.719081 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:16.719156 master-0 kubenswrapper[27835]: I0318 13:31:16.719131 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:16.720366 master-0 kubenswrapper[27835]: I0318 13:31:16.720289 27835 status_manager.go:851] "Failed to get status for pod" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" pod="openshift-kube-apiserver/installer-7-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 13:31:16.720366 master-0 kubenswrapper[27835]: E0318 13:31:16.720318 27835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:16.900496 master-0 kubenswrapper[27835]: I0318 13:31:16.900409 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body=
Mar 18 13:31:16.900694 master-0 kubenswrapper[27835]: I0318 13:31:16.900505 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused"
Mar 18 13:31:17.730741 master-0 kubenswrapper[27835]: I0318 13:31:17.730695 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"8e875ed6c9f10aaefb9788ca2831cf2532c50221c7aecb6d4b7e05af82de64fe"}
Mar 18 13:31:17.731111 master-0 kubenswrapper[27835]: I0318 13:31:17.730749 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"6a6a587231c485ea25100f263005b9317e4bceff881511b7a0d2174fc0218a91"}
Mar 18 13:31:17.731111 master-0 kubenswrapper[27835]: I0318 13:31:17.730760 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"54bae3bf11a4616291860b2fc0da52c0cad392e68f05b6d9abf88d5a5923c8d7"}
Mar 18 13:31:17.737874 master-0 kubenswrapper[27835]: I0318 13:31:17.737847 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9587703208e136f5328582e1ba0fc966/kube-controller-manager/0.log"
Mar 18 13:31:17.738007 master-0 kubenswrapper[27835]: I0318 13:31:17.737988 27835 generic.go:334] "Generic (PLEG): container finished" podID="9587703208e136f5328582e1ba0fc966" containerID="7b30a002d6aa83cf20f0a552aace635f5910f80408cca31198fd9a8f8469738c" exitCode=1
Mar 18 13:31:17.738092 master-0 kubenswrapper[27835]: I0318 13:31:17.738076 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerDied","Data":"7b30a002d6aa83cf20f0a552aace635f5910f80408cca31198fd9a8f8469738c"}
Mar 18 13:31:17.738643 master-0 kubenswrapper[27835]: I0318 13:31:17.738627 27835 scope.go:117] "RemoveContainer" containerID="7b30a002d6aa83cf20f0a552aace635f5910f80408cca31198fd9a8f8469738c"
Mar 18 13:31:18.755838 master-0 kubenswrapper[27835]: I0318 13:31:18.755773 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_9587703208e136f5328582e1ba0fc966/kube-controller-manager/0.log"
Mar 18 13:31:18.756854 master-0 kubenswrapper[27835]: I0318 13:31:18.755945 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"9587703208e136f5328582e1ba0fc966","Type":"ContainerStarted","Data":"3fbe594176c9752d9a925fbe58daa539d33d149d5b723ac3f46150aa60744fe6"}
Mar 18 13:31:18.772637 master-0 kubenswrapper[27835]: I0318 13:31:18.772584 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"50645ddd29e832ca67b1eff8064326caa6b64dc8c041e3e320f85305a0cde7b3"}
Mar 18 13:31:18.772637 master-0 kubenswrapper[27835]: I0318 13:31:18.772640 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"3cae843f2a8e3c3c3212b1177305c1d5","Type":"ContainerStarted","Data":"189d8572509da824ebc7a9f4ce12ce8d71689f4c4ada22898b45810b571f88d7"}
Mar 18 13:31:18.772915 master-0 kubenswrapper[27835]: I0318 13:31:18.772773 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:18.772915 master-0 kubenswrapper[27835]: I0318 13:31:18.772883 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:18.772915 master-0 kubenswrapper[27835]: I0318 13:31:18.772904 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:20.107901 master-0 kubenswrapper[27835]: I0318 13:31:20.107831 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 18 13:31:20.108486 master-0 kubenswrapper[27835]: I0318 13:31:20.107906 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 18 13:31:21.313911 master-0 kubenswrapper[27835]: I0318 13:31:21.313843 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:21.313911 master-0 kubenswrapper[27835]: I0318 13:31:21.313916 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:21.319751 master-0 kubenswrapper[27835]: I0318 13:31:21.319702 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:23.794012 master-0 kubenswrapper[27835]: I0318 13:31:23.793953 27835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:24.077180 master-0 kubenswrapper[27835]: I0318 13:31:24.077011 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:24.077180 master-0 kubenswrapper[27835]: I0318 13:31:24.077073 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:24.082498 master-0 kubenswrapper[27835]: I0318 13:31:24.082399 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 13:31:24.085596 master-0 kubenswrapper[27835]: I0318 13:31:24.085533 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="3cae843f2a8e3c3c3212b1177305c1d5" podUID="49781758-eaf0-48a6-a96a-d340fc2b98ff"
Mar 18 13:31:25.090794 master-0 kubenswrapper[27835]: I0318 13:31:25.090686 27835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:25.090794 master-0 kubenswrapper[27835]: I0318 13:31:25.090744 27835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cd4e038b-f3af-470e-9787-9c98e1d1f129"
Mar 18 13:31:26.311056 master-0 kubenswrapper[27835]: I0318 13:31:26.310989 27835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="3cae843f2a8e3c3c3212b1177305c1d5" podUID="49781758-eaf0-48a6-a96a-d340fc2b98ff"
Mar 18 13:31:26.900184 master-0 kubenswrapper[27835]: I0318 13:31:26.900071 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe
status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:31:26.900184 master-0 kubenswrapper[27835]: I0318 13:31:26.900164 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:31:27.336868 master-0 kubenswrapper[27835]: I0318 13:31:27.336816 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:31:27.336868 master-0 kubenswrapper[27835]: I0318 13:31:27.336882 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:31:27.342757 master-0 kubenswrapper[27835]: I0318 13:31:27.342676 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:31:28.124684 master-0 kubenswrapper[27835]: I0318 13:31:28.124629 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 13:31:30.107678 master-0 kubenswrapper[27835]: I0318 13:31:30.107531 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:31:30.107678 master-0 kubenswrapper[27835]: I0318 13:31:30.107602 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" 
probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:31:32.896091 master-0 kubenswrapper[27835]: I0318 13:31:32.896016 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 13:31:32.996885 master-0 kubenswrapper[27835]: I0318 13:31:32.996781 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 13:31:33.493977 master-0 kubenswrapper[27835]: I0318 13:31:33.493914 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-hl5hl" Mar 18 13:31:33.532205 master-0 kubenswrapper[27835]: I0318 13:31:33.532172 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 13:31:34.274658 master-0 kubenswrapper[27835]: I0318 13:31:34.274613 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 13:31:34.437054 master-0 kubenswrapper[27835]: I0318 13:31:34.437010 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 13:31:34.547622 master-0 kubenswrapper[27835]: I0318 13:31:34.547454 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 13:31:34.728721 master-0 kubenswrapper[27835]: I0318 13:31:34.728647 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 13:31:34.984045 master-0 kubenswrapper[27835]: I0318 13:31:34.983971 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 13:31:35.033397 master-0 
kubenswrapper[27835]: I0318 13:31:35.033326 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 13:31:35.064325 master-0 kubenswrapper[27835]: I0318 13:31:35.064243 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 13:31:35.104138 master-0 kubenswrapper[27835]: I0318 13:31:35.104073 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 13:31:35.125497 master-0 kubenswrapper[27835]: I0318 13:31:35.125387 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 13:31:35.535736 master-0 kubenswrapper[27835]: I0318 13:31:35.535683 27835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 13:31:35.576172 master-0 kubenswrapper[27835]: I0318 13:31:35.576109 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 13:31:35.748465 master-0 kubenswrapper[27835]: I0318 13:31:35.748306 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 13:31:35.860021 master-0 kubenswrapper[27835]: I0318 13:31:35.859855 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 13:31:35.923949 master-0 kubenswrapper[27835]: I0318 13:31:35.923814 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 13:31:35.927234 master-0 kubenswrapper[27835]: I0318 13:31:35.927196 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 13:31:35.932616 master-0 
kubenswrapper[27835]: I0318 13:31:35.932548 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 13:31:36.043849 master-0 kubenswrapper[27835]: I0318 13:31:36.043770 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 13:31:36.169976 master-0 kubenswrapper[27835]: I0318 13:31:36.169903 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 13:31:36.182522 master-0 kubenswrapper[27835]: I0318 13:31:36.181820 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 13:31:36.293640 master-0 kubenswrapper[27835]: I0318 13:31:36.293548 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 13:31:36.309460 master-0 kubenswrapper[27835]: I0318 13:31:36.309391 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 13:31:36.371866 master-0 kubenswrapper[27835]: I0318 13:31:36.371789 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-9lwzk" Mar 18 13:31:36.379583 master-0 kubenswrapper[27835]: I0318 13:31:36.379500 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 13:31:36.576054 master-0 kubenswrapper[27835]: I0318 13:31:36.575909 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-c8bj17hs40gij" Mar 18 13:31:36.594482 master-0 kubenswrapper[27835]: I0318 13:31:36.594375 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" 
Mar 18 13:31:36.675694 master-0 kubenswrapper[27835]: I0318 13:31:36.675599 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 13:31:36.687528 master-0 kubenswrapper[27835]: I0318 13:31:36.687449 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 13:31:36.700495 master-0 kubenswrapper[27835]: I0318 13:31:36.700332 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 13:31:36.901245 master-0 kubenswrapper[27835]: I0318 13:31:36.901067 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:31:36.901245 master-0 kubenswrapper[27835]: I0318 13:31:36.901144 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" Mar 18 13:31:36.912337 master-0 kubenswrapper[27835]: I0318 13:31:36.912288 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-dz6jc" Mar 18 13:31:36.994110 master-0 kubenswrapper[27835]: I0318 13:31:36.994036 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 13:31:37.109975 master-0 kubenswrapper[27835]: I0318 13:31:37.109901 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 13:31:37.271831 master-0 kubenswrapper[27835]: I0318 13:31:37.271735 27835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 13:31:37.289945 master-0 kubenswrapper[27835]: I0318 13:31:37.289842 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-sn888" Mar 18 13:31:37.341628 master-0 kubenswrapper[27835]: I0318 13:31:37.340175 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-6lm6r" Mar 18 13:31:37.377454 master-0 kubenswrapper[27835]: I0318 13:31:37.377303 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 13:31:37.398451 master-0 kubenswrapper[27835]: I0318 13:31:37.398315 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 13:31:37.433528 master-0 kubenswrapper[27835]: I0318 13:31:37.433400 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 13:31:37.486110 master-0 kubenswrapper[27835]: I0318 13:31:37.486024 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 13:31:37.491854 master-0 kubenswrapper[27835]: I0318 13:31:37.491805 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 13:31:37.496783 master-0 kubenswrapper[27835]: I0318 13:31:37.496745 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-vbmv6" Mar 18 13:31:37.615705 master-0 kubenswrapper[27835]: I0318 13:31:37.615574 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 13:31:37.656260 master-0 kubenswrapper[27835]: I0318 
13:31:37.656157 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 13:31:37.705348 master-0 kubenswrapper[27835]: I0318 13:31:37.705274 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 13:31:37.713801 master-0 kubenswrapper[27835]: I0318 13:31:37.713729 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 13:31:37.734530 master-0 kubenswrapper[27835]: I0318 13:31:37.734447 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 13:31:37.755334 master-0 kubenswrapper[27835]: I0318 13:31:37.755268 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 13:31:37.810990 master-0 kubenswrapper[27835]: I0318 13:31:37.810886 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 13:31:37.871675 master-0 kubenswrapper[27835]: I0318 13:31:37.871528 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 13:31:37.972743 master-0 kubenswrapper[27835]: I0318 13:31:37.972649 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 13:31:38.028706 master-0 kubenswrapper[27835]: I0318 13:31:38.028648 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 13:31:38.092566 master-0 kubenswrapper[27835]: I0318 13:31:38.092187 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 13:31:38.105589 master-0 kubenswrapper[27835]: I0318 13:31:38.105525 27835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 13:31:38.122925 master-0 kubenswrapper[27835]: I0318 13:31:38.122797 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-qfz5b" Mar 18 13:31:38.132876 master-0 kubenswrapper[27835]: I0318 13:31:38.132832 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 13:31:38.234604 master-0 kubenswrapper[27835]: I0318 13:31:38.234518 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 13:31:38.345784 master-0 kubenswrapper[27835]: I0318 13:31:38.345727 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 13:31:38.532081 master-0 kubenswrapper[27835]: I0318 13:31:38.532024 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 13:31:38.550725 master-0 kubenswrapper[27835]: I0318 13:31:38.550662 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 13:31:38.558338 master-0 kubenswrapper[27835]: I0318 13:31:38.558278 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 13:31:38.620042 master-0 kubenswrapper[27835]: I0318 13:31:38.619973 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 13:31:38.726858 master-0 kubenswrapper[27835]: I0318 13:31:38.726754 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 13:31:38.828157 master-0 kubenswrapper[27835]: I0318 13:31:38.828000 27835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 13:31:38.832645 master-0 kubenswrapper[27835]: I0318 13:31:38.832587 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 13:31:38.843979 master-0 kubenswrapper[27835]: I0318 13:31:38.843926 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 13:31:38.876581 master-0 kubenswrapper[27835]: I0318 13:31:38.876507 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 13:31:38.964736 master-0 kubenswrapper[27835]: I0318 13:31:38.964655 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 13:31:39.031589 master-0 kubenswrapper[27835]: I0318 13:31:39.031536 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 13:31:39.053906 master-0 kubenswrapper[27835]: I0318 13:31:39.053854 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 13:31:39.207103 master-0 kubenswrapper[27835]: I0318 13:31:39.206998 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-kcq89" Mar 18 13:31:39.342110 master-0 kubenswrapper[27835]: I0318 13:31:39.342050 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 13:31:39.422239 master-0 kubenswrapper[27835]: I0318 13:31:39.422186 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 13:31:39.510393 master-0 kubenswrapper[27835]: I0318 13:31:39.510293 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 13:31:39.547754 master-0 kubenswrapper[27835]: I0318 13:31:39.547702 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 13:31:39.558159 master-0 kubenswrapper[27835]: I0318 13:31:39.558098 27835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 13:31:39.566184 master-0 kubenswrapper[27835]: I0318 13:31:39.566132 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:31:39.566350 master-0 kubenswrapper[27835]: I0318 13:31:39.566198 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 13:31:39.570497 master-0 kubenswrapper[27835]: I0318 13:31:39.570374 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 13:31:39.590064 master-0 kubenswrapper[27835]: I0318 13:31:39.589970 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=16.58995375 podStartE2EDuration="16.58995375s" podCreationTimestamp="2026-03-18 13:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:31:39.585308731 +0000 UTC m=+463.550520291" watchObservedRunningTime="2026-03-18 13:31:39.58995375 +0000 UTC m=+463.555165310" Mar 18 13:31:39.590916 master-0 kubenswrapper[27835]: I0318 13:31:39.590858 27835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 13:31:39.597787 master-0 kubenswrapper[27835]: I0318 13:31:39.597743 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 
18 13:31:39.645197 master-0 kubenswrapper[27835]: I0318 13:31:39.645148 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 13:31:39.674485 master-0 kubenswrapper[27835]: I0318 13:31:39.674405 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 13:31:39.681425 master-0 kubenswrapper[27835]: I0318 13:31:39.681353 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 13:31:39.744808 master-0 kubenswrapper[27835]: I0318 13:31:39.744742 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 13:31:39.773520 master-0 kubenswrapper[27835]: I0318 13:31:39.773420 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 13:31:39.975722 master-0 kubenswrapper[27835]: I0318 13:31:39.975675 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 13:31:40.108043 master-0 kubenswrapper[27835]: I0318 13:31:40.107894 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body= Mar 18 13:31:40.108043 master-0 kubenswrapper[27835]: I0318 13:31:40.107992 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" Mar 18 13:31:40.154320 master-0 kubenswrapper[27835]: I0318 13:31:40.154235 27835 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 13:31:40.232265 master-0 kubenswrapper[27835]: I0318 13:31:40.232192 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 13:31:40.240532 master-0 kubenswrapper[27835]: I0318 13:31:40.240480 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sszww" Mar 18 13:31:40.259490 master-0 kubenswrapper[27835]: I0318 13:31:40.259431 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 13:31:40.297638 master-0 kubenswrapper[27835]: I0318 13:31:40.297540 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 13:31:40.356507 master-0 kubenswrapper[27835]: I0318 13:31:40.356439 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 13:31:40.433843 master-0 kubenswrapper[27835]: I0318 13:31:40.433773 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 13:31:40.436255 master-0 kubenswrapper[27835]: I0318 13:31:40.436220 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 13:31:40.443981 master-0 kubenswrapper[27835]: I0318 13:31:40.443916 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-q8tt6" Mar 18 13:31:40.449459 master-0 kubenswrapper[27835]: I0318 13:31:40.449393 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 13:31:40.508028 master-0 kubenswrapper[27835]: I0318 13:31:40.507977 27835 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 13:31:40.524272 master-0 kubenswrapper[27835]: I0318 13:31:40.524208 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 13:31:40.618290 master-0 kubenswrapper[27835]: I0318 13:31:40.618245 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 13:31:40.652129 master-0 kubenswrapper[27835]: I0318 13:31:40.651975 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 13:31:40.670842 master-0 kubenswrapper[27835]: I0318 13:31:40.670767 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 13:31:40.755355 master-0 kubenswrapper[27835]: I0318 13:31:40.755240 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 13:31:40.770685 master-0 kubenswrapper[27835]: I0318 13:31:40.770611 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 13:31:40.849699 master-0 kubenswrapper[27835]: I0318 13:31:40.849587 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 13:31:40.854560 master-0 kubenswrapper[27835]: I0318 13:31:40.854528 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 13:31:40.858805 master-0 kubenswrapper[27835]: I0318 13:31:40.858767 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 13:31:40.868925 master-0 kubenswrapper[27835]: I0318 13:31:40.868884 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 13:31:40.880698 master-0 kubenswrapper[27835]: I0318 13:31:40.880663 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 13:31:40.953566 master-0 kubenswrapper[27835]: I0318 13:31:40.952984 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hnp25" Mar 18 13:31:41.028089 master-0 kubenswrapper[27835]: I0318 13:31:41.027944 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 13:31:41.056112 master-0 kubenswrapper[27835]: I0318 13:31:41.056046 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 13:31:41.074759 master-0 kubenswrapper[27835]: I0318 13:31:41.074681 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7rrtk" Mar 18 13:31:41.089070 master-0 kubenswrapper[27835]: I0318 13:31:41.089013 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 13:31:41.094754 master-0 kubenswrapper[27835]: I0318 13:31:41.094676 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 13:31:41.124709 master-0 kubenswrapper[27835]: I0318 13:31:41.124659 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 13:31:41.124942 master-0 kubenswrapper[27835]: I0318 13:31:41.124660 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 13:31:41.125911 master-0 kubenswrapper[27835]: I0318 13:31:41.125883 27835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 13:31:41.132349 master-0 kubenswrapper[27835]: I0318 13:31:41.132316 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 13:31:41.160498 master-0 kubenswrapper[27835]: I0318 13:31:41.160470 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-kn6rx" Mar 18 13:31:41.208674 master-0 kubenswrapper[27835]: I0318 13:31:41.208607 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 13:31:41.235669 master-0 kubenswrapper[27835]: I0318 13:31:41.235569 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 13:31:41.398072 master-0 kubenswrapper[27835]: I0318 13:31:41.397912 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 13:31:41.404777 master-0 kubenswrapper[27835]: I0318 13:31:41.404719 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 13:31:41.538853 master-0 kubenswrapper[27835]: I0318 13:31:41.538768 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 13:31:41.540706 master-0 kubenswrapper[27835]: I0318 13:31:41.540677 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 13:31:41.577829 master-0 kubenswrapper[27835]: I0318 13:31:41.577738 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 13:31:41.623882 master-0 kubenswrapper[27835]: I0318 13:31:41.623775 27835 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 13:31:41.653751 master-0 kubenswrapper[27835]: I0318 13:31:41.653634 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 13:31:41.661071 master-0 kubenswrapper[27835]: I0318 13:31:41.661049 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:31:41.707697 master-0 kubenswrapper[27835]: I0318 13:31:41.707630 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 13:31:41.718484 master-0 kubenswrapper[27835]: I0318 13:31:41.718404 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 13:31:41.915016 master-0 kubenswrapper[27835]: I0318 13:31:41.914952 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 13:31:41.935776 master-0 kubenswrapper[27835]: I0318 13:31:41.935711 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-lvs7l" Mar 18 13:31:42.048767 master-0 kubenswrapper[27835]: I0318 13:31:42.048706 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 13:31:42.065652 master-0 kubenswrapper[27835]: I0318 13:31:42.065595 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-7vfv5" Mar 18 13:31:42.173849 master-0 kubenswrapper[27835]: I0318 13:31:42.173736 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 13:31:42.182647 
master-0 kubenswrapper[27835]: I0318 13:31:42.182610 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 13:31:42.215493 master-0 kubenswrapper[27835]: I0318 13:31:42.215386 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 13:31:42.297330 master-0 kubenswrapper[27835]: I0318 13:31:42.297276 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 13:31:42.356862 master-0 kubenswrapper[27835]: I0318 13:31:42.356790 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 13:31:42.397539 master-0 kubenswrapper[27835]: I0318 13:31:42.391668 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 13:31:42.460743 master-0 kubenswrapper[27835]: I0318 13:31:42.460604 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 13:31:42.505948 master-0 kubenswrapper[27835]: I0318 13:31:42.505899 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lz5d6" Mar 18 13:31:42.510076 master-0 kubenswrapper[27835]: I0318 13:31:42.510036 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 13:31:42.545322 master-0 kubenswrapper[27835]: I0318 13:31:42.545246 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 13:31:42.548802 master-0 kubenswrapper[27835]: I0318 13:31:42.548679 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-tls" Mar 18 13:31:42.554266 master-0 kubenswrapper[27835]: I0318 13:31:42.554234 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 13:31:42.611856 master-0 kubenswrapper[27835]: I0318 13:31:42.611809 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 13:31:42.621537 master-0 kubenswrapper[27835]: I0318 13:31:42.621509 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 13:31:42.815940 master-0 kubenswrapper[27835]: I0318 13:31:42.815793 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 13:31:42.899544 master-0 kubenswrapper[27835]: I0318 13:31:42.899479 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 13:31:42.919761 master-0 kubenswrapper[27835]: I0318 13:31:42.919682 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 13:31:43.067530 master-0 kubenswrapper[27835]: I0318 13:31:43.067291 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 13:31:43.086796 master-0 kubenswrapper[27835]: I0318 13:31:43.086720 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6fb5w" Mar 18 13:31:43.095222 master-0 kubenswrapper[27835]: I0318 13:31:43.095124 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 13:31:43.107605 master-0 kubenswrapper[27835]: I0318 13:31:43.107555 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 13:31:43.166928 master-0 kubenswrapper[27835]: I0318 13:31:43.166683 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 13:31:43.178283 master-0 kubenswrapper[27835]: I0318 13:31:43.178236 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 13:31:43.283685 master-0 kubenswrapper[27835]: I0318 13:31:43.283570 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 13:31:43.285503 master-0 kubenswrapper[27835]: I0318 13:31:43.285401 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 13:31:43.368084 master-0 kubenswrapper[27835]: I0318 13:31:43.367370 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 13:31:43.368739 master-0 kubenswrapper[27835]: I0318 13:31:43.368661 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 13:31:43.378870 master-0 kubenswrapper[27835]: I0318 13:31:43.378804 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 13:31:43.476343 master-0 kubenswrapper[27835]: I0318 13:31:43.476287 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 13:31:43.478178 master-0 kubenswrapper[27835]: I0318 13:31:43.478142 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 13:31:43.504522 master-0 kubenswrapper[27835]: I0318 13:31:43.504478 27835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 13:31:43.578126 master-0 kubenswrapper[27835]: I0318 13:31:43.578072 27835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 13:31:43.698942 master-0 kubenswrapper[27835]: I0318 13:31:43.698864 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 13:31:43.759956 master-0 kubenswrapper[27835]: I0318 13:31:43.759889 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 13:31:43.760668 master-0 kubenswrapper[27835]: I0318 13:31:43.760645 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-mbkdw" Mar 18 13:31:43.822862 master-0 kubenswrapper[27835]: I0318 13:31:43.822792 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 13:31:43.855001 master-0 kubenswrapper[27835]: I0318 13:31:43.854903 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 13:31:43.868035 master-0 kubenswrapper[27835]: I0318 13:31:43.867961 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 13:31:43.918873 master-0 kubenswrapper[27835]: I0318 13:31:43.918774 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 13:31:43.984005 master-0 kubenswrapper[27835]: I0318 13:31:43.983830 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 13:31:44.000340 master-0 kubenswrapper[27835]: I0318 13:31:44.000237 27835 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 13:31:44.040376 master-0 kubenswrapper[27835]: I0318 13:31:44.040296 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 13:31:44.049952 master-0 kubenswrapper[27835]: I0318 13:31:44.049803 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 13:31:44.065622 master-0 kubenswrapper[27835]: I0318 13:31:44.065564 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 13:31:44.086693 master-0 kubenswrapper[27835]: I0318 13:31:44.086615 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 13:31:44.087198 master-0 kubenswrapper[27835]: I0318 13:31:44.087131 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 13:31:44.110241 master-0 kubenswrapper[27835]: I0318 13:31:44.110173 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 13:31:44.171674 master-0 kubenswrapper[27835]: I0318 13:31:44.171604 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 13:31:44.185191 master-0 kubenswrapper[27835]: I0318 13:31:44.185068 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 13:31:44.208949 master-0 kubenswrapper[27835]: I0318 13:31:44.208850 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 13:31:44.209256 master-0 kubenswrapper[27835]: I0318 13:31:44.208960 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 13:31:44.272586 master-0 kubenswrapper[27835]: I0318 13:31:44.272274 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 13:31:44.282846 master-0 kubenswrapper[27835]: I0318 13:31:44.282795 27835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 13:31:44.319028 master-0 kubenswrapper[27835]: I0318 13:31:44.318919 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 13:31:44.439221 master-0 kubenswrapper[27835]: I0318 13:31:44.439118 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 13:31:44.444839 master-0 kubenswrapper[27835]: I0318 13:31:44.444796 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 13:31:44.498046 master-0 kubenswrapper[27835]: I0318 13:31:44.497940 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 13:31:44.567655 master-0 kubenswrapper[27835]: I0318 13:31:44.567480 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 13:31:44.631965 master-0 kubenswrapper[27835]: I0318 13:31:44.631676 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 13:31:44.663793 master-0 kubenswrapper[27835]: I0318 13:31:44.663705 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 13:31:44.680262 master-0 kubenswrapper[27835]: I0318 13:31:44.680156 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"service-ca-bundle" Mar 18 13:31:44.731395 master-0 kubenswrapper[27835]: I0318 13:31:44.731306 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6v458hjp7b0gm" Mar 18 13:31:44.816849 master-0 kubenswrapper[27835]: I0318 13:31:44.816788 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-vxpff" Mar 18 13:31:44.823577 master-0 kubenswrapper[27835]: I0318 13:31:44.823481 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 13:31:44.880307 master-0 kubenswrapper[27835]: I0318 13:31:44.880238 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 13:31:44.887127 master-0 kubenswrapper[27835]: I0318 13:31:44.887086 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 13:31:44.913672 master-0 kubenswrapper[27835]: I0318 13:31:44.913610 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 13:31:44.927259 master-0 kubenswrapper[27835]: I0318 13:31:44.927183 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 13:31:45.092052 master-0 kubenswrapper[27835]: I0318 13:31:45.091916 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 13:31:45.092401 master-0 kubenswrapper[27835]: I0318 13:31:45.092035 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 13:31:45.120928 master-0 kubenswrapper[27835]: I0318 13:31:45.120868 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"trusted-ca-bundle" Mar 18 13:31:45.187938 master-0 kubenswrapper[27835]: I0318 13:31:45.187485 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 13:31:45.237641 master-0 kubenswrapper[27835]: I0318 13:31:45.237571 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 13:31:45.308802 master-0 kubenswrapper[27835]: I0318 13:31:45.308702 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 13:31:45.331591 master-0 kubenswrapper[27835]: I0318 13:31:45.331525 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 13:31:45.359157 master-0 kubenswrapper[27835]: I0318 13:31:45.358967 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-dprq6" Mar 18 13:31:45.369748 master-0 kubenswrapper[27835]: I0318 13:31:45.369658 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 13:31:45.388940 master-0 kubenswrapper[27835]: I0318 13:31:45.388866 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 13:31:45.434845 master-0 kubenswrapper[27835]: I0318 13:31:45.434775 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 13:31:45.463311 master-0 kubenswrapper[27835]: I0318 13:31:45.463239 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 13:31:45.463806 master-0 kubenswrapper[27835]: I0318 13:31:45.463773 27835 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 13:31:45.468060 master-0 kubenswrapper[27835]: I0318 13:31:45.468021 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 13:31:45.513777 master-0 kubenswrapper[27835]: I0318 13:31:45.513675 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 13:31:45.516253 master-0 kubenswrapper[27835]: I0318 13:31:45.516205 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-sjstk" Mar 18 13:31:45.526138 master-0 kubenswrapper[27835]: I0318 13:31:45.526076 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 13:31:45.540403 master-0 kubenswrapper[27835]: I0318 13:31:45.540321 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 13:31:45.554510 master-0 kubenswrapper[27835]: I0318 13:31:45.554449 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 13:31:45.606203 master-0 kubenswrapper[27835]: I0318 13:31:45.606119 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:31:45.619397 master-0 kubenswrapper[27835]: I0318 13:31:45.619252 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 13:31:45.717185 master-0 kubenswrapper[27835]: I0318 13:31:45.717122 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 13:31:45.717686 master-0 kubenswrapper[27835]: I0318 13:31:45.717643 27835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 13:31:45.749659 master-0 kubenswrapper[27835]: I0318 13:31:45.749563 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 13:31:45.777507 master-0 kubenswrapper[27835]: I0318 13:31:45.777457 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 13:31:45.798690 master-0 kubenswrapper[27835]: I0318 13:31:45.798648 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 13:31:45.897890 master-0 kubenswrapper[27835]: I0318 13:31:45.896728 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 13:31:45.916372 master-0 kubenswrapper[27835]: I0318 13:31:45.916302 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 13:31:45.995752 master-0 kubenswrapper[27835]: I0318 13:31:45.995674 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 13:31:46.009593 master-0 kubenswrapper[27835]: I0318 13:31:46.009522 27835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 13:31:46.010011 master-0 kubenswrapper[27835]: I0318 13:31:46.009948 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="7a4744531cb137d7252790be662d8cc8" containerName="startup-monitor" containerID="cri-o://2474e8afe2c961844d8e61e408e5196602dffdc620ec016ddcbc79c6247500a1" gracePeriod=5 Mar 18 13:31:46.109881 master-0 kubenswrapper[27835]: I0318 13:31:46.109810 27835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 13:31:46.166658 master-0 kubenswrapper[27835]: I0318 13:31:46.166601 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:31:46.168224 master-0 kubenswrapper[27835]: I0318 13:31:46.168176 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 13:31:46.204272 master-0 kubenswrapper[27835]: I0318 13:31:46.204190 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 13:31:46.207051 master-0 kubenswrapper[27835]: I0318 13:31:46.207018 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r54p6" Mar 18 13:31:46.218840 master-0 kubenswrapper[27835]: I0318 13:31:46.218807 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 13:31:46.227364 master-0 kubenswrapper[27835]: I0318 13:31:46.227309 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-b4r5l" Mar 18 13:31:46.246507 master-0 kubenswrapper[27835]: I0318 13:31:46.246462 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 13:31:46.268584 master-0 kubenswrapper[27835]: I0318 13:31:46.268524 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 13:31:46.298745 master-0 kubenswrapper[27835]: I0318 13:31:46.298673 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 13:31:46.301886 master-0 kubenswrapper[27835]: I0318 13:31:46.301829 27835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 13:31:46.347775 master-0 kubenswrapper[27835]: I0318 13:31:46.347707 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 13:31:46.372806 master-0 kubenswrapper[27835]: I0318 13:31:46.372741 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 13:31:46.387212 master-0 kubenswrapper[27835]: I0318 13:31:46.387152 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 13:31:46.417506 master-0 kubenswrapper[27835]: I0318 13:31:46.417219 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 13:31:46.430442 master-0 kubenswrapper[27835]: I0318 13:31:46.428895 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 13:31:46.473808 master-0 kubenswrapper[27835]: I0318 13:31:46.473720 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 13:31:46.488019 master-0 kubenswrapper[27835]: I0318 13:31:46.487942 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 13:31:46.565314 master-0 kubenswrapper[27835]: I0318 13:31:46.565253 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 13:31:46.625586 master-0 kubenswrapper[27835]: I0318 13:31:46.625529 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-5dnvq" Mar 18 13:31:46.636122 master-0 kubenswrapper[27835]: I0318 13:31:46.636057 27835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 13:31:46.653478 master-0 kubenswrapper[27835]: I0318 13:31:46.653378 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 13:31:46.686546 master-0 kubenswrapper[27835]: I0318 13:31:46.686404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 13:31:46.736754 master-0 kubenswrapper[27835]: I0318 13:31:46.736694 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 13:31:46.741531 master-0 kubenswrapper[27835]: I0318 13:31:46.741482 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 13:31:46.761313 master-0 kubenswrapper[27835]: I0318 13:31:46.761251 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 13:31:46.817979 master-0 kubenswrapper[27835]: I0318 13:31:46.817924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-5rua6jvkkc769" Mar 18 13:31:46.901084 master-0 kubenswrapper[27835]: I0318 13:31:46.900992 27835 patch_prober.go:28] interesting pod/console-5d794fddf9-gh6gq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: connection refused" start-of-body= Mar 18 13:31:46.901745 master-0 kubenswrapper[27835]: I0318 13:31:46.901113 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": dial tcp 10.128.0.103:8443: connect: 
connection refused" Mar 18 13:31:46.983589 master-0 kubenswrapper[27835]: I0318 13:31:46.983467 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 13:31:47.086158 master-0 kubenswrapper[27835]: I0318 13:31:47.086100 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 13:31:47.094955 master-0 kubenswrapper[27835]: I0318 13:31:47.094921 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 13:31:47.145712 master-0 kubenswrapper[27835]: I0318 13:31:47.145663 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 13:31:47.198681 master-0 kubenswrapper[27835]: I0318 13:31:47.198605 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 13:31:47.241492 master-0 kubenswrapper[27835]: I0318 13:31:47.241263 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 13:31:47.255272 master-0 kubenswrapper[27835]: I0318 13:31:47.255196 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 13:31:47.294867 master-0 kubenswrapper[27835]: I0318 13:31:47.294789 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 13:31:47.370096 master-0 kubenswrapper[27835]: I0318 13:31:47.370042 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 13:31:47.484930 master-0 kubenswrapper[27835]: I0318 13:31:47.484872 27835 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 13:31:47.734032 master-0 kubenswrapper[27835]: I0318 13:31:47.733966 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 13:31:47.830094 master-0 kubenswrapper[27835]: I0318 13:31:47.829944 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-7kt87"
Mar 18 13:31:47.868694 master-0 kubenswrapper[27835]: I0318 13:31:47.868623 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 18 13:31:47.876823 master-0 kubenswrapper[27835]: I0318 13:31:47.876740 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 13:31:48.011912 master-0 kubenswrapper[27835]: I0318 13:31:48.011792 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 13:31:48.085941 master-0 kubenswrapper[27835]: I0318 13:31:48.085868 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 13:31:48.115757 master-0 kubenswrapper[27835]: I0318 13:31:48.115672 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 13:31:48.156166 master-0 kubenswrapper[27835]: I0318 13:31:48.156107 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 13:31:48.166482 master-0 kubenswrapper[27835]: I0318 13:31:48.166446 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 13:31:48.226976 master-0 kubenswrapper[27835]: I0318 13:31:48.226924 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 18 13:31:48.329553 master-0 kubenswrapper[27835]: I0318 13:31:48.329319 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-hr2xw"
Mar 18 13:31:48.499678 master-0 kubenswrapper[27835]: I0318 13:31:48.499596 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 13:31:48.570827 master-0 kubenswrapper[27835]: I0318 13:31:48.570756 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 13:31:48.594312 master-0 kubenswrapper[27835]: I0318 13:31:48.594150 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 18 13:31:48.617449 master-0 kubenswrapper[27835]: I0318 13:31:48.617317 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 13:31:48.630203 master-0 kubenswrapper[27835]: I0318 13:31:48.630113 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 13:31:48.687955 master-0 kubenswrapper[27835]: I0318 13:31:48.687890 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 13:31:48.719691 master-0 kubenswrapper[27835]: I0318 13:31:48.719603 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-67jff"
Mar 18 13:31:48.800494 master-0 kubenswrapper[27835]: I0318 13:31:48.800404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 13:31:48.984230 master-0 kubenswrapper[27835]: I0318 13:31:48.984156 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 13:31:49.036099 master-0 kubenswrapper[27835]: I0318 13:31:49.036014 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 13:31:49.043871 master-0 kubenswrapper[27835]: I0318 13:31:49.043809 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 13:31:49.095556 master-0 kubenswrapper[27835]: I0318 13:31:49.095485 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 13:31:49.258077 master-0 kubenswrapper[27835]: I0318 13:31:49.257926 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 13:31:49.323315 master-0 kubenswrapper[27835]: I0318 13:31:49.323245 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 13:31:49.500571 master-0 kubenswrapper[27835]: I0318 13:31:49.500437 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 18 13:31:49.634502 master-0 kubenswrapper[27835]: I0318 13:31:49.634349 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:31:49.858807 master-0 kubenswrapper[27835]: I0318 13:31:49.858773 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 13:31:49.919045 master-0 kubenswrapper[27835]: I0318 13:31:49.918976 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 13:31:49.921112 master-0 kubenswrapper[27835]: I0318 13:31:49.921045 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 18 13:31:50.096874 master-0 kubenswrapper[27835]: I0318 13:31:50.096809 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 18 13:31:50.108186 master-0 kubenswrapper[27835]: I0318 13:31:50.108138 27835 patch_prober.go:28] interesting pod/console-9cc97458b-bkd6r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused" start-of-body=
Mar 18 13:31:50.108540 master-0 kubenswrapper[27835]: I0318 13:31:50.108484 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" probeResult="failure" output="Get \"https://10.128.0.104:8443/health\": dial tcp 10.128.0.104:8443: connect: connection refused"
Mar 18 13:31:50.131234 master-0 kubenswrapper[27835]: I0318 13:31:50.131172 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 13:31:50.500280 master-0 kubenswrapper[27835]: I0318 13:31:50.500209 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 13:31:50.649244 master-0 kubenswrapper[27835]: I0318 13:31:50.649178 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 13:31:50.749155 master-0 kubenswrapper[27835]: I0318 13:31:50.745334 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 13:31:50.808641 master-0 kubenswrapper[27835]: I0318 13:31:50.808524 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 18 13:31:50.895371 master-0 kubenswrapper[27835]: I0318 13:31:50.895296 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 13:31:51.180585 master-0 kubenswrapper[27835]: I0318 13:31:51.180510 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 13:31:51.351763 master-0 kubenswrapper[27835]: I0318 13:31:51.351706 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_7a4744531cb137d7252790be662d8cc8/startup-monitor/0.log"
Mar 18 13:31:51.352038 master-0 kubenswrapper[27835]: I0318 13:31:51.351793 27835 generic.go:334] "Generic (PLEG): container finished" podID="7a4744531cb137d7252790be662d8cc8" containerID="2474e8afe2c961844d8e61e408e5196602dffdc620ec016ddcbc79c6247500a1" exitCode=137
Mar 18 13:31:51.403620 master-0 kubenswrapper[27835]: I0318 13:31:51.403552 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 13:31:51.483030 master-0 kubenswrapper[27835]: I0318 13:31:51.482896 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 18 13:31:51.584075 master-0 kubenswrapper[27835]: I0318 13:31:51.583966 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_7a4744531cb137d7252790be662d8cc8/startup-monitor/0.log"
Mar 18 13:31:51.584283 master-0 kubenswrapper[27835]: I0318 13:31:51.584106 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:31:51.743762 master-0 kubenswrapper[27835]: I0318 13:31:51.743621 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log\") pod \"7a4744531cb137d7252790be662d8cc8\" (UID: \"7a4744531cb137d7252790be662d8cc8\") "
Mar 18 13:31:51.743762 master-0 kubenswrapper[27835]: I0318 13:31:51.743701 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests\") pod \"7a4744531cb137d7252790be662d8cc8\" (UID: \"7a4744531cb137d7252790be662d8cc8\") "
Mar 18 13:31:51.743762 master-0 kubenswrapper[27835]: I0318 13:31:51.743731 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir\") pod \"7a4744531cb137d7252790be662d8cc8\" (UID: \"7a4744531cb137d7252790be662d8cc8\") "
Mar 18 13:31:51.744062 master-0 kubenswrapper[27835]: I0318 13:31:51.743773 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock\") pod \"7a4744531cb137d7252790be662d8cc8\" (UID: \"7a4744531cb137d7252790be662d8cc8\") "
Mar 18 13:31:51.744062 master-0 kubenswrapper[27835]: I0318 13:31:51.743795 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir\") pod \"7a4744531cb137d7252790be662d8cc8\" (UID: \"7a4744531cb137d7252790be662d8cc8\") "
Mar 18 13:31:51.744393 master-0 kubenswrapper[27835]: I0318 13:31:51.744343 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests" (OuterVolumeSpecName: "manifests") pod "7a4744531cb137d7252790be662d8cc8" (UID: "7a4744531cb137d7252790be662d8cc8"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:51.744393 master-0 kubenswrapper[27835]: I0318 13:31:51.744390 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log" (OuterVolumeSpecName: "var-log") pod "7a4744531cb137d7252790be662d8cc8" (UID: "7a4744531cb137d7252790be662d8cc8"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:51.744545 master-0 kubenswrapper[27835]: I0318 13:31:51.744392 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7a4744531cb137d7252790be662d8cc8" (UID: "7a4744531cb137d7252790be662d8cc8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:51.744545 master-0 kubenswrapper[27835]: I0318 13:31:51.744440 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock" (OuterVolumeSpecName: "var-lock") pod "7a4744531cb137d7252790be662d8cc8" (UID: "7a4744531cb137d7252790be662d8cc8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:51.749737 master-0 kubenswrapper[27835]: I0318 13:31:51.749668 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "7a4744531cb137d7252790be662d8cc8" (UID: "7a4744531cb137d7252790be662d8cc8"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:31:51.847488 master-0 kubenswrapper[27835]: I0318 13:31:51.847363 27835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:51.847488 master-0 kubenswrapper[27835]: I0318 13:31:51.847462 27835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:51.847488 master-0 kubenswrapper[27835]: I0318 13:31:51.847480 27835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:51.847488 master-0 kubenswrapper[27835]: I0318 13:31:51.847493 27835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:51.847488 master-0 kubenswrapper[27835]: I0318 13:31:51.847509 27835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7a4744531cb137d7252790be662d8cc8-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 13:31:52.031255 master-0 kubenswrapper[27835]: I0318 13:31:52.031125 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 13:31:52.087201 master-0 kubenswrapper[27835]: I0318 13:31:52.087126 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-qqvgp"
Mar 18 13:31:52.290821 master-0 kubenswrapper[27835]: I0318 13:31:52.290694 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4744531cb137d7252790be662d8cc8" path="/var/lib/kubelet/pods/7a4744531cb137d7252790be662d8cc8/volumes"
Mar 18 13:31:52.292231 master-0 kubenswrapper[27835]: I0318 13:31:52.292200 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 18 13:31:52.359261 master-0 kubenswrapper[27835]: I0318 13:31:52.359104 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_7a4744531cb137d7252790be662d8cc8/startup-monitor/0.log"
Mar 18 13:31:52.359261 master-0 kubenswrapper[27835]: I0318 13:31:52.359177 27835 scope.go:117] "RemoveContainer" containerID="2474e8afe2c961844d8e61e408e5196602dffdc620ec016ddcbc79c6247500a1"
Mar 18 13:31:52.359261 master-0 kubenswrapper[27835]: I0318 13:31:52.359226 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 13:31:52.397331 master-0 kubenswrapper[27835]: I0318 13:31:52.397249 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 13:31:53.101389 master-0 kubenswrapper[27835]: I0318 13:31:53.101318 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 13:31:53.208245 master-0 kubenswrapper[27835]: I0318 13:31:53.208187 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 13:31:56.616162 master-0 kubenswrapper[27835]: I0318 13:31:56.616048 27835 scope.go:117] "RemoveContainer" containerID="45c1bfe81a4ec9a67e0f96ccae8aa8e92cc20e9572ced1d331993a3be67d4dd1"
Mar 18 13:31:56.906869 master-0 kubenswrapper[27835]: I0318 13:31:56.906661 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:31:56.913956 master-0 kubenswrapper[27835]: I0318 13:31:56.913890 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:32:00.113144 master-0 kubenswrapper[27835]: I0318 13:32:00.113094 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9cc97458b-bkd6r"
Mar 18 13:32:00.119003 master-0 kubenswrapper[27835]: I0318 13:32:00.118960 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9cc97458b-bkd6r"
Mar 18 13:32:00.204895 master-0 kubenswrapper[27835]: I0318 13:32:00.204839 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"]
Mar 18 13:32:05.489801 master-0 kubenswrapper[27835]: I0318 13:32:05.489702 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-99pzm_fe643e40-d06d-4e69-9be3-0065c2a78567/marketplace-operator/3.log"
Mar 18 13:32:05.490828 master-0 kubenswrapper[27835]: I0318 13:32:05.489810 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerID="c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514" exitCode=0
Mar 18 13:32:05.490828 master-0 kubenswrapper[27835]: I0318 13:32:05.489877 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerDied","Data":"c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514"}
Mar 18 13:32:05.490828 master-0 kubenswrapper[27835]: I0318 13:32:05.489939 27835 scope.go:117] "RemoveContainer" containerID="038875b7496310eba4d592d71731dabe07e1cc335c819eb3e88f7c7069a8d44c"
Mar 18 13:32:05.491120 master-0 kubenswrapper[27835]: I0318 13:32:05.491015 27835 scope.go:117] "RemoveContainer" containerID="c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514"
Mar 18 13:32:05.491927 master-0 kubenswrapper[27835]: E0318 13:32:05.491764 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567"
Mar 18 13:32:11.794603 master-0 kubenswrapper[27835]: I0318 13:32:11.794535 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:32:11.794603 master-0 kubenswrapper[27835]: I0318 13:32:11.794604 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:32:11.795294 master-0 kubenswrapper[27835]: I0318 13:32:11.795014 27835 scope.go:117] "RemoveContainer" containerID="c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514"
Mar 18 13:32:11.795294 master-0 kubenswrapper[27835]: E0318 13:32:11.795214 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-99pzm_openshift-marketplace(fe643e40-d06d-4e69-9be3-0065c2a78567)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567"
Mar 18 13:32:25.243076 master-0 kubenswrapper[27835]: I0318 13:32:25.242959 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d794fddf9-gh6gq" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console" containerID="cri-o://b6e635d52f946549fcf27defbcf724ae9289a976eded079682e4355e8de44795" gracePeriod=15
Mar 18 13:32:25.663633 master-0 kubenswrapper[27835]: I0318 13:32:25.663575 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d794fddf9-gh6gq_414429b2-4ccb-49cd-8bae-f9a6ab653831/console/0.log"
Mar 18 13:32:25.663848 master-0 kubenswrapper[27835]: I0318 13:32:25.663635 27835 generic.go:334] "Generic (PLEG): container finished" podID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerID="b6e635d52f946549fcf27defbcf724ae9289a976eded079682e4355e8de44795" exitCode=2
Mar 18 13:32:25.663848 master-0 kubenswrapper[27835]: I0318 13:32:25.663668 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d794fddf9-gh6gq" event={"ID":"414429b2-4ccb-49cd-8bae-f9a6ab653831","Type":"ContainerDied","Data":"b6e635d52f946549fcf27defbcf724ae9289a976eded079682e4355e8de44795"}
Mar 18 13:32:25.663848 master-0 kubenswrapper[27835]: I0318 13:32:25.663702 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d794fddf9-gh6gq" event={"ID":"414429b2-4ccb-49cd-8bae-f9a6ab653831","Type":"ContainerDied","Data":"8ad937341a8553834ca1513f1e79113ee92be5c3147b5f5323178fd7ea20e047"}
Mar 18 13:32:25.663848 master-0 kubenswrapper[27835]: I0318 13:32:25.663723 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad937341a8553834ca1513f1e79113ee92be5c3147b5f5323178fd7ea20e047"
Mar 18 13:32:25.666253 master-0 kubenswrapper[27835]: I0318 13:32:25.666212 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d794fddf9-gh6gq_414429b2-4ccb-49cd-8bae-f9a6ab653831/console/0.log"
Mar 18 13:32:25.666336 master-0 kubenswrapper[27835]: I0318 13:32:25.666288 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:32:25.722549 master-0 kubenswrapper[27835]: I0318 13:32:25.722458 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.722549 master-0 kubenswrapper[27835]: I0318 13:32:25.722534 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw9j8\" (UniqueName: \"kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.722852 master-0 kubenswrapper[27835]: I0318 13:32:25.722622 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.722852 master-0 kubenswrapper[27835]: I0318 13:32:25.722676 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.722852 master-0 kubenswrapper[27835]: I0318 13:32:25.722769 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.722852 master-0 kubenswrapper[27835]: I0318 13:32:25.722827 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.723040 master-0 kubenswrapper[27835]: I0318 13:32:25.722856 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca\") pod \"414429b2-4ccb-49cd-8bae-f9a6ab653831\" (UID: \"414429b2-4ccb-49cd-8bae-f9a6ab653831\") "
Mar 18 13:32:25.723502 master-0 kubenswrapper[27835]: I0318 13:32:25.723376 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:32:25.723774 master-0 kubenswrapper[27835]: I0318 13:32:25.723733 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca" (OuterVolumeSpecName: "service-ca") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:32:25.724249 master-0 kubenswrapper[27835]: I0318 13:32:25.724200 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:32:25.724822 master-0 kubenswrapper[27835]: I0318 13:32:25.724769 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config" (OuterVolumeSpecName: "console-config") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:32:25.727168 master-0 kubenswrapper[27835]: I0318 13:32:25.727119 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8" (OuterVolumeSpecName: "kube-api-access-dw9j8") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "kube-api-access-dw9j8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:32:25.727251 master-0 kubenswrapper[27835]: I0318 13:32:25.727146 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:32:25.728281 master-0 kubenswrapper[27835]: I0318 13:32:25.728252 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "414429b2-4ccb-49cd-8bae-f9a6ab653831" (UID: "414429b2-4ccb-49cd-8bae-f9a6ab653831"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:32:25.824671 master-0 kubenswrapper[27835]: I0318 13:32:25.824606 27835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.824671 master-0 kubenswrapper[27835]: I0318 13:32:25.824650 27835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.825020 master-0 kubenswrapper[27835]: I0318 13:32:25.824689 27835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.825020 master-0 kubenswrapper[27835]: I0318 13:32:25.824701 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.825020 master-0 kubenswrapper[27835]: I0318 13:32:25.824714 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.825020 master-0 kubenswrapper[27835]: I0318 13:32:25.824723 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/414429b2-4ccb-49cd-8bae-f9a6ab653831-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:25.825020 master-0 kubenswrapper[27835]: I0318 13:32:25.824731 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw9j8\" (UniqueName: \"kubernetes.io/projected/414429b2-4ccb-49cd-8bae-f9a6ab653831-kube-api-access-dw9j8\") on node \"master-0\" DevicePath \"\""
Mar 18 13:32:26.290029 master-0 kubenswrapper[27835]: I0318 13:32:26.289954 27835 scope.go:117] "RemoveContainer" containerID="c61f80b39335bb556396af97805e745180f287f4f2c6b676611d48e0f72cc514"
Mar 18 13:32:26.672849 master-0 kubenswrapper[27835]: I0318 13:32:26.672775 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" event={"ID":"fe643e40-d06d-4e69-9be3-0065c2a78567","Type":"ContainerStarted","Data":"0ebd194435ff701e93a0fe560302b993a543084a7c3d570d3b65abff64b48a9a"}
Mar 18 13:32:26.672849 master-0 kubenswrapper[27835]: I0318 13:32:26.672799 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d794fddf9-gh6gq"
Mar 18 13:32:26.673394 master-0 kubenswrapper[27835]: I0318 13:32:26.673363 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:32:26.674472 master-0 kubenswrapper[27835]: I0318 13:32:26.674385 27835 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-99pzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body=
Mar 18 13:32:26.674553 master-0 kubenswrapper[27835]: I0318 13:32:26.674472 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm" podUID="fe643e40-d06d-4e69-9be3-0065c2a78567" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused"
Mar 18 13:32:26.693183 master-0 kubenswrapper[27835]: I0318 13:32:26.693111 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"]
Mar 18 13:32:26.700492 master-0 kubenswrapper[27835]: I0318 13:32:26.700440 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d794fddf9-gh6gq"]
Mar 18 13:32:27.689560 master-0 kubenswrapper[27835]: I0318 13:32:27.689483 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-99pzm"
Mar 18 13:32:28.297783 master-0 kubenswrapper[27835]: I0318 13:32:28.297684 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" path="/var/lib/kubelet/pods/414429b2-4ccb-49cd-8bae-f9a6ab653831/volumes"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134217 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"]
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: E0318 13:34:31.134526 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134541 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: E0318 13:34:31.134575 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" containerName="installer"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134581 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" containerName="installer"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: E0318 13:34:31.134591 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4744531cb137d7252790be662d8cc8" containerName="startup-monitor"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134597 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4744531cb137d7252790be662d8cc8" containerName="startup-monitor"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134745 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="414429b2-4ccb-49cd-8bae-f9a6ab653831" containerName="console"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134770 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e1007-d503-49db-abc1-8daa04b3d881" containerName="installer"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.134783 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4744531cb137d7252790be662d8cc8" containerName="startup-monitor"
Mar 18 13:34:31.135830 master-0 kubenswrapper[27835]: I0318 13:34:31.135213 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz"
Mar 18 13:34:31.143022 master-0 kubenswrapper[27835]: I0318 13:34:31.139562 27835 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config"
Mar 18 13:34:31.143022 master-0 kubenswrapper[27835]: I0318 13:34:31.140856 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt"
Mar 18 13:34:31.143022 master-0 kubenswrapper[27835]: I0318 13:34:31.141084 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt"
Mar 18 13:34:31.143022 master-0 kubenswrapper[27835]: I0318 13:34:31.141109 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config"
Mar 18 13:34:31.193643 master-0 kubenswrapper[27835]: I0318 13:34:31.150384 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"]
Mar 18 13:34:31.296050 master-0 kubenswrapper[27835]: I0318 13:34:31.295975 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc4gh\" (UniqueName: \"kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz"
Mar 18 13:34:31.296593 master-0 kubenswrapper[27835]: I0318 13:34:31.296567 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz"
Mar 18 13:34:31.297142 master-0 kubenswrapper[27835]: I0318 13:34:31.297083 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz"
Mar 18 13:34:31.399181 master-0 kubenswrapper[27835]: I0318 13:34:31.398957 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz"
Mar 18 13:34:31.399181 master-0 kubenswrapper[27835]: I0318 13:34:31.399095 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc4gh\" (UniqueName: \"kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh\") pod \"sushy-emulator-59477995f9-p67lz\"
(UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.399181 master-0 kubenswrapper[27835]: I0318 13:34:31.399135 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.401225 master-0 kubenswrapper[27835]: I0318 13:34:31.401150 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.404071 master-0 kubenswrapper[27835]: I0318 13:34:31.403822 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.426235 master-0 kubenswrapper[27835]: I0318 13:34:31.426159 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc4gh\" (UniqueName: \"kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh\") pod \"sushy-emulator-59477995f9-p67lz\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.520020 master-0 kubenswrapper[27835]: I0318 13:34:31.519962 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:31.911843 master-0 kubenswrapper[27835]: I0318 13:34:31.911718 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"] Mar 18 13:34:31.930133 master-0 kubenswrapper[27835]: I0318 13:34:31.930083 27835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:34:32.760730 master-0 kubenswrapper[27835]: I0318 13:34:32.760678 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" event={"ID":"355da07d-e01b-4940-a772-686d744c936c","Type":"ContainerStarted","Data":"58336f73f174603f1b575703caed84c86d87898edd00bc63087744de4671d9f2"} Mar 18 13:34:39.824840 master-0 kubenswrapper[27835]: I0318 13:34:39.824761 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" event={"ID":"355da07d-e01b-4940-a772-686d744c936c","Type":"ContainerStarted","Data":"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872"} Mar 18 13:34:39.852782 master-0 kubenswrapper[27835]: I0318 13:34:39.852676 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" podStartSLOduration=1.7939556479999998 podStartE2EDuration="8.852654401s" podCreationTimestamp="2026-03-18 13:34:31 +0000 UTC" firstStartedPulling="2026-03-18 13:34:31.930036977 +0000 UTC m=+635.895248537" lastFinishedPulling="2026-03-18 13:34:38.98873572 +0000 UTC m=+642.953947290" observedRunningTime="2026-03-18 13:34:39.845809473 +0000 UTC m=+643.811021083" watchObservedRunningTime="2026-03-18 13:34:39.852654401 +0000 UTC m=+643.817865981" Mar 18 13:34:41.521347 master-0 kubenswrapper[27835]: I0318 13:34:41.521227 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:41.521347 
master-0 kubenswrapper[27835]: I0318 13:34:41.521299 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:41.531374 master-0 kubenswrapper[27835]: I0318 13:34:41.531289 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:34:41.844008 master-0 kubenswrapper[27835]: I0318 13:34:41.843841 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:35:01.623255 master-0 kubenswrapper[27835]: I0318 13:35:01.623180 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s"] Mar 18 13:35:01.625725 master-0 kubenswrapper[27835]: I0318 13:35:01.625703 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.643652 master-0 kubenswrapper[27835]: I0318 13:35:01.643572 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s"] Mar 18 13:35:01.671406 master-0 kubenswrapper[27835]: I0318 13:35:01.671330 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5r5\" (UniqueName: \"kubernetes.io/projected/4bb73b5b-9012-43bb-b196-5a3f930bf220-kube-api-access-gd5r5\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: \"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.671649 master-0 kubenswrapper[27835]: I0318 13:35:01.671473 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/4bb73b5b-9012-43bb-b196-5a3f930bf220-os-client-config\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: 
\"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.775620 master-0 kubenswrapper[27835]: I0318 13:35:01.773065 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5r5\" (UniqueName: \"kubernetes.io/projected/4bb73b5b-9012-43bb-b196-5a3f930bf220-kube-api-access-gd5r5\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: \"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.775925 master-0 kubenswrapper[27835]: I0318 13:35:01.775819 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/4bb73b5b-9012-43bb-b196-5a3f930bf220-os-client-config\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: \"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.787518 master-0 kubenswrapper[27835]: I0318 13:35:01.787096 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/4bb73b5b-9012-43bb-b196-5a3f930bf220-os-client-config\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: \"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.789152 master-0 kubenswrapper[27835]: I0318 13:35:01.789104 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5r5\" (UniqueName: \"kubernetes.io/projected/4bb73b5b-9012-43bb-b196-5a3f930bf220-kube-api-access-gd5r5\") pod \"nova-console-poller-54dcfd4f75-hlm7s\" (UID: \"4bb73b5b-9012-43bb-b196-5a3f930bf220\") " pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:01.956678 master-0 kubenswrapper[27835]: I0318 13:35:01.956623 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" Mar 18 13:35:02.359815 master-0 kubenswrapper[27835]: I0318 13:35:02.357980 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s"] Mar 18 13:35:02.369630 master-0 kubenswrapper[27835]: W0318 13:35:02.369584 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bb73b5b_9012_43bb_b196_5a3f930bf220.slice/crio-df67fd9d3b935ac8a79a4caab2a1c582bca0ce80918a938f0fb1decd598912cc WatchSource:0}: Error finding container df67fd9d3b935ac8a79a4caab2a1c582bca0ce80918a938f0fb1decd598912cc: Status 404 returned error can't find the container with id df67fd9d3b935ac8a79a4caab2a1c582bca0ce80918a938f0fb1decd598912cc Mar 18 13:35:03.023540 master-0 kubenswrapper[27835]: I0318 13:35:03.023349 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" event={"ID":"4bb73b5b-9012-43bb-b196-5a3f930bf220","Type":"ContainerStarted","Data":"df67fd9d3b935ac8a79a4caab2a1c582bca0ce80918a938f0fb1decd598912cc"} Mar 18 13:35:08.074221 master-0 kubenswrapper[27835]: I0318 13:35:08.074170 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" event={"ID":"4bb73b5b-9012-43bb-b196-5a3f930bf220","Type":"ContainerStarted","Data":"3591f8e536ddffebc6a63f566cf47cb1c8c1ca28b2a80fb6d857808b54aa3587"} Mar 18 13:35:09.089166 master-0 kubenswrapper[27835]: I0318 13:35:09.089095 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" event={"ID":"4bb73b5b-9012-43bb-b196-5a3f930bf220","Type":"ContainerStarted","Data":"7416b892f0c4bd5e56bbb9bb83e7a9f66cf8b48ef9c5ac5eeb385d69d4c9ee66"} Mar 18 13:35:09.118016 master-0 kubenswrapper[27835]: I0318 13:35:09.117922 27835 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-54dcfd4f75-hlm7s" podStartSLOduration=1.890706748 podStartE2EDuration="8.117894175s" podCreationTimestamp="2026-03-18 13:35:01 +0000 UTC" firstStartedPulling="2026-03-18 13:35:02.371459601 +0000 UTC m=+666.336671171" lastFinishedPulling="2026-03-18 13:35:08.598647008 +0000 UTC m=+672.563858598" observedRunningTime="2026-03-18 13:35:09.110948563 +0000 UTC m=+673.076160213" watchObservedRunningTime="2026-03-18 13:35:09.117894175 +0000 UTC m=+673.083105765" Mar 18 13:35:34.424021 master-0 kubenswrapper[27835]: I0318 13:35:34.423959 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-87cddbd75-wst8r"] Mar 18 13:35:34.425673 master-0 kubenswrapper[27835]: I0318 13:35:34.425614 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.469885 master-0 kubenswrapper[27835]: I0318 13:35:34.439798 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-87cddbd75-wst8r"] Mar 18 13:35:34.543677 master-0 kubenswrapper[27835]: I0318 13:35:34.543628 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/da3528af-8efc-4199-b1fb-8e3b7f4604e6-nova-console-recordings-pv\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.543933 master-0 kubenswrapper[27835]: I0318 13:35:34.543695 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx4z9\" (UniqueName: \"kubernetes.io/projected/da3528af-8efc-4199-b1fb-8e3b7f4604e6-kube-api-access-zx4z9\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " 
pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.543933 master-0 kubenswrapper[27835]: I0318 13:35:34.543769 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/da3528af-8efc-4199-b1fb-8e3b7f4604e6-os-client-config\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.645094 master-0 kubenswrapper[27835]: I0318 13:35:34.645018 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/da3528af-8efc-4199-b1fb-8e3b7f4604e6-os-client-config\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.645323 master-0 kubenswrapper[27835]: I0318 13:35:34.645099 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/da3528af-8efc-4199-b1fb-8e3b7f4604e6-nova-console-recordings-pv\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.645323 master-0 kubenswrapper[27835]: I0318 13:35:34.645152 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx4z9\" (UniqueName: \"kubernetes.io/projected/da3528af-8efc-4199-b1fb-8e3b7f4604e6-kube-api-access-zx4z9\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.648619 master-0 kubenswrapper[27835]: I0318 13:35:34.648591 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/da3528af-8efc-4199-b1fb-8e3b7f4604e6-os-client-config\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:34.664445 master-0 kubenswrapper[27835]: I0318 13:35:34.664396 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx4z9\" (UniqueName: \"kubernetes.io/projected/da3528af-8efc-4199-b1fb-8e3b7f4604e6-kube-api-access-zx4z9\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:35.218971 master-0 kubenswrapper[27835]: I0318 13:35:35.218897 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/da3528af-8efc-4199-b1fb-8e3b7f4604e6-nova-console-recordings-pv\") pod \"nova-console-recorder-87cddbd75-wst8r\" (UID: \"da3528af-8efc-4199-b1fb-8e3b7f4604e6\") " pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:35.383219 master-0 kubenswrapper[27835]: I0318 13:35:35.383152 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" Mar 18 13:35:35.883432 master-0 kubenswrapper[27835]: I0318 13:35:35.883376 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-87cddbd75-wst8r"] Mar 18 13:35:35.900860 master-0 kubenswrapper[27835]: I0318 13:35:35.900796 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" event={"ID":"da3528af-8efc-4199-b1fb-8e3b7f4604e6","Type":"ContainerStarted","Data":"918f186f951d60595e89db9355cbdd0d91a37811b49a56c9dcfdb1b72e30790e"} Mar 18 13:35:45.996043 master-0 kubenswrapper[27835]: I0318 13:35:45.995893 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" event={"ID":"da3528af-8efc-4199-b1fb-8e3b7f4604e6","Type":"ContainerStarted","Data":"e207da40ced7d837836ed65a0dbd6986b4a06887b36d114c644fe10716d52169"} Mar 18 13:35:47.004251 master-0 kubenswrapper[27835]: I0318 13:35:47.004163 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" event={"ID":"da3528af-8efc-4199-b1fb-8e3b7f4604e6","Type":"ContainerStarted","Data":"f634e151e42d82d3ee585e794d60db55b52919407c72986d365c325fa014972c"} Mar 18 13:35:47.025202 master-0 kubenswrapper[27835]: I0318 13:35:47.025137 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-87cddbd75-wst8r" podStartSLOduration=2.8011690700000003 podStartE2EDuration="13.025119761s" podCreationTimestamp="2026-03-18 13:35:34 +0000 UTC" firstStartedPulling="2026-03-18 13:35:35.868086071 +0000 UTC m=+699.833297631" lastFinishedPulling="2026-03-18 13:35:46.092036752 +0000 UTC m=+710.057248322" observedRunningTime="2026-03-18 13:35:47.021341997 +0000 UTC m=+710.986553567" watchObservedRunningTime="2026-03-18 13:35:47.025119761 +0000 UTC m=+710.990331321" Mar 18 13:35:56.747379 master-0 
kubenswrapper[27835]: I0318 13:35:56.747277 27835 scope.go:117] "RemoveContainer" containerID="b6e635d52f946549fcf27defbcf724ae9289a976eded079682e4355e8de44795" Mar 18 13:36:15.627970 master-0 kubenswrapper[27835]: I0318 13:36:15.627825 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2"] Mar 18 13:36:15.629107 master-0 kubenswrapper[27835]: I0318 13:36:15.629077 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.643738 master-0 kubenswrapper[27835]: I0318 13:36:15.643672 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2"] Mar 18 13:36:15.799782 master-0 kubenswrapper[27835]: I0318 13:36:15.799714 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.799782 master-0 kubenswrapper[27835]: I0318 13:36:15.799790 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p9v2\" (UniqueName: \"kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.800065 master-0 kubenswrapper[27835]: I0318 13:36:15.799824 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.901840 master-0 kubenswrapper[27835]: I0318 13:36:15.901676 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.902160 master-0 kubenswrapper[27835]: I0318 13:36:15.901861 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p9v2\" (UniqueName: \"kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.902160 master-0 kubenswrapper[27835]: I0318 13:36:15.901901 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.902364 master-0 kubenswrapper[27835]: I0318 13:36:15.902320 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.902541 master-0 kubenswrapper[27835]: I0318 13:36:15.902383 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.923703 master-0 kubenswrapper[27835]: I0318 13:36:15.923654 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p9v2\" (UniqueName: \"kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:15.946345 master-0 kubenswrapper[27835]: I0318 13:36:15.946275 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:16.425707 master-0 kubenswrapper[27835]: I0318 13:36:16.425631 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2"] Mar 18 13:36:16.429179 master-0 kubenswrapper[27835]: W0318 13:36:16.429144 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f395df9_c5f6_4d1d_ad03_554e15eac129.slice/crio-1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d WatchSource:0}: Error finding container 1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d: Status 404 returned error can't find the container with id 1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d Mar 18 13:36:17.251465 master-0 kubenswrapper[27835]: I0318 13:36:17.251307 27835 generic.go:334] "Generic (PLEG): container finished" podID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerID="70c97449674d44820f4bb42ea159a40241bb49d5862435dab583c881d001e776" exitCode=0 Mar 18 13:36:17.252168 master-0 kubenswrapper[27835]: I0318 13:36:17.251626 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" event={"ID":"8f395df9-c5f6-4d1d-ad03-554e15eac129","Type":"ContainerDied","Data":"70c97449674d44820f4bb42ea159a40241bb49d5862435dab583c881d001e776"} Mar 18 13:36:17.252168 master-0 kubenswrapper[27835]: I0318 13:36:17.252047 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" event={"ID":"8f395df9-c5f6-4d1d-ad03-554e15eac129","Type":"ContainerStarted","Data":"1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d"} Mar 18 13:36:19.272259 master-0 kubenswrapper[27835]: I0318 13:36:19.272147 27835 
generic.go:334] "Generic (PLEG): container finished" podID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerID="7662cf6a9db23a3b445ceeb89023693c72210394d67db975eae72d7e7e1b4a50" exitCode=0 Mar 18 13:36:19.273279 master-0 kubenswrapper[27835]: I0318 13:36:19.272253 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" event={"ID":"8f395df9-c5f6-4d1d-ad03-554e15eac129","Type":"ContainerDied","Data":"7662cf6a9db23a3b445ceeb89023693c72210394d67db975eae72d7e7e1b4a50"} Mar 18 13:36:20.283967 master-0 kubenswrapper[27835]: I0318 13:36:20.283889 27835 generic.go:334] "Generic (PLEG): container finished" podID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerID="73881dcec9ccffd351ec1c68e0a43e02e56096fb01f6e18f9e7d7cbf3b074c9c" exitCode=0 Mar 18 13:36:20.289346 master-0 kubenswrapper[27835]: I0318 13:36:20.289296 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" event={"ID":"8f395df9-c5f6-4d1d-ad03-554e15eac129","Type":"ContainerDied","Data":"73881dcec9ccffd351ec1c68e0a43e02e56096fb01f6e18f9e7d7cbf3b074c9c"} Mar 18 13:36:21.582229 master-0 kubenswrapper[27835]: I0318 13:36:21.582170 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:21.732119 master-0 kubenswrapper[27835]: I0318 13:36:21.732033 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util\") pod \"8f395df9-c5f6-4d1d-ad03-554e15eac129\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " Mar 18 13:36:21.732361 master-0 kubenswrapper[27835]: I0318 13:36:21.732238 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p9v2\" (UniqueName: \"kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2\") pod \"8f395df9-c5f6-4d1d-ad03-554e15eac129\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " Mar 18 13:36:21.732361 master-0 kubenswrapper[27835]: I0318 13:36:21.732355 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle\") pod \"8f395df9-c5f6-4d1d-ad03-554e15eac129\" (UID: \"8f395df9-c5f6-4d1d-ad03-554e15eac129\") " Mar 18 13:36:21.732999 master-0 kubenswrapper[27835]: I0318 13:36:21.732944 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle" (OuterVolumeSpecName: "bundle") pod "8f395df9-c5f6-4d1d-ad03-554e15eac129" (UID: "8f395df9-c5f6-4d1d-ad03-554e15eac129"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:36:21.742343 master-0 kubenswrapper[27835]: I0318 13:36:21.742295 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util" (OuterVolumeSpecName: "util") pod "8f395df9-c5f6-4d1d-ad03-554e15eac129" (UID: "8f395df9-c5f6-4d1d-ad03-554e15eac129"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:36:21.817876 master-0 kubenswrapper[27835]: I0318 13:36:21.814698 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2" (OuterVolumeSpecName: "kube-api-access-8p9v2") pod "8f395df9-c5f6-4d1d-ad03-554e15eac129" (UID: "8f395df9-c5f6-4d1d-ad03-554e15eac129"). InnerVolumeSpecName "kube-api-access-8p9v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:36:21.835388 master-0 kubenswrapper[27835]: I0318 13:36:21.835320 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p9v2\" (UniqueName: \"kubernetes.io/projected/8f395df9-c5f6-4d1d-ad03-554e15eac129-kube-api-access-8p9v2\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:21.835388 master-0 kubenswrapper[27835]: I0318 13:36:21.835368 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:21.835388 master-0 kubenswrapper[27835]: I0318 13:36:21.835383 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8f395df9-c5f6-4d1d-ad03-554e15eac129-util\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:22.300275 master-0 kubenswrapper[27835]: I0318 13:36:22.300209 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" event={"ID":"8f395df9-c5f6-4d1d-ad03-554e15eac129","Type":"ContainerDied","Data":"1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d"} Mar 18 13:36:22.300275 master-0 kubenswrapper[27835]: I0318 13:36:22.300249 27835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1614c3b5305a5aaafd76315d515188467ef286a3e48e9a43d147d71bdeb6be1d" Mar 18 13:36:22.300592 master-0 kubenswrapper[27835]: I0318 13:36:22.300280 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4ltwc2" Mar 18 13:36:22.346258 master-0 kubenswrapper[27835]: E0318 13:36:22.346192 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f395df9_c5f6_4d1d_ad03_554e15eac129.slice\": RecentStats: unable to find data in memory cache]" Mar 18 13:36:28.172318 master-0 kubenswrapper[27835]: I0318 13:36:28.172258 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-6f9bb89768-6vxqh"] Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: E0318 13:36:28.172554 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="util" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: I0318 13:36:28.172571 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="util" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: E0318 13:36:28.172611 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="extract" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: I0318 13:36:28.172620 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="extract" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: E0318 13:36:28.172649 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="pull" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: I0318 13:36:28.172658 27835 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="pull" Mar 18 13:36:28.172996 master-0 kubenswrapper[27835]: I0318 13:36:28.172858 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f395df9-c5f6-4d1d-ad03-554e15eac129" containerName="extract" Mar 18 13:36:28.173375 master-0 kubenswrapper[27835]: I0318 13:36:28.173345 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.176600 master-0 kubenswrapper[27835]: I0318 13:36:28.176559 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 18 13:36:28.176771 master-0 kubenswrapper[27835]: I0318 13:36:28.176705 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 18 13:36:28.176819 master-0 kubenswrapper[27835]: I0318 13:36:28.176786 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 18 13:36:28.176854 master-0 kubenswrapper[27835]: I0318 13:36:28.176794 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 18 13:36:28.176946 master-0 kubenswrapper[27835]: I0318 13:36:28.176924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 18 13:36:28.192596 master-0 kubenswrapper[27835]: I0318 13:36:28.192518 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6f9bb89768-6vxqh"] Mar 18 13:36:28.277795 master-0 kubenswrapper[27835]: I0318 13:36:28.277747 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-545bz\" (UniqueName: \"kubernetes.io/projected/b46c49cf-0ccf-436f-aa30-21cedff365d0-kube-api-access-545bz\") pod \"lvms-operator-6f9bb89768-6vxqh\" 
(UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.277795 master-0 kubenswrapper[27835]: I0318 13:36:28.277814 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46c49cf-0ccf-436f-aa30-21cedff365d0-socket-dir\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.278178 master-0 kubenswrapper[27835]: I0318 13:36:28.277835 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-apiservice-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.278178 master-0 kubenswrapper[27835]: I0318 13:36:28.277854 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-webhook-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.278178 master-0 kubenswrapper[27835]: I0318 13:36:28.277869 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-metrics-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.379102 master-0 kubenswrapper[27835]: I0318 13:36:28.379028 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-metrics-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.379358 master-0 kubenswrapper[27835]: I0318 13:36:28.379122 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-apiservice-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.379358 master-0 kubenswrapper[27835]: I0318 13:36:28.379155 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-webhook-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.379592 master-0 kubenswrapper[27835]: I0318 13:36:28.379544 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-545bz\" (UniqueName: \"kubernetes.io/projected/b46c49cf-0ccf-436f-aa30-21cedff365d0-kube-api-access-545bz\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.379751 master-0 kubenswrapper[27835]: I0318 13:36:28.379724 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46c49cf-0ccf-436f-aa30-21cedff365d0-socket-dir\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.380319 master-0 kubenswrapper[27835]: I0318 13:36:28.380277 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46c49cf-0ccf-436f-aa30-21cedff365d0-socket-dir\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.383399 master-0 kubenswrapper[27835]: I0318 13:36:28.383168 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-webhook-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.383399 master-0 kubenswrapper[27835]: I0318 13:36:28.383329 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-metrics-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.386716 master-0 kubenswrapper[27835]: I0318 13:36:28.383967 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b46c49cf-0ccf-436f-aa30-21cedff365d0-apiservice-cert\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.400258 master-0 kubenswrapper[27835]: I0318 13:36:28.400221 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-545bz\" (UniqueName: \"kubernetes.io/projected/b46c49cf-0ccf-436f-aa30-21cedff365d0-kube-api-access-545bz\") pod \"lvms-operator-6f9bb89768-6vxqh\" (UID: \"b46c49cf-0ccf-436f-aa30-21cedff365d0\") " pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.490505 master-0 kubenswrapper[27835]: 
I0318 13:36:28.490024 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:28.922927 master-0 kubenswrapper[27835]: I0318 13:36:28.922867 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-6f9bb89768-6vxqh"] Mar 18 13:36:29.353170 master-0 kubenswrapper[27835]: I0318 13:36:29.352937 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" event={"ID":"b46c49cf-0ccf-436f-aa30-21cedff365d0","Type":"ContainerStarted","Data":"675ae897c2b76216e0e29dbbf616d20a96f3d693b07df73ff643e81d99504b4b"} Mar 18 13:36:46.483173 master-0 kubenswrapper[27835]: I0318 13:36:46.483101 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" event={"ID":"b46c49cf-0ccf-436f-aa30-21cedff365d0","Type":"ContainerStarted","Data":"545e450a964bc7fdee534e71909a8ded41cc3aceaac065f48aefc2a47b33bacb"} Mar 18 13:36:46.483848 master-0 kubenswrapper[27835]: I0318 13:36:46.483290 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:46.505431 master-0 kubenswrapper[27835]: I0318 13:36:46.505315 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" podStartSLOduration=1.145306981 podStartE2EDuration="18.505280191s" podCreationTimestamp="2026-03-18 13:36:28 +0000 UTC" firstStartedPulling="2026-03-18 13:36:28.933088539 +0000 UTC m=+752.898300109" lastFinishedPulling="2026-03-18 13:36:46.293061759 +0000 UTC m=+770.258273319" observedRunningTime="2026-03-18 13:36:46.503258055 +0000 UTC m=+770.468469645" watchObservedRunningTime="2026-03-18 13:36:46.505280191 +0000 UTC m=+770.470491751" Mar 18 13:36:47.495335 master-0 kubenswrapper[27835]: I0318 13:36:47.495268 27835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-storage/lvms-operator-6f9bb89768-6vxqh" Mar 18 13:36:50.675429 master-0 kubenswrapper[27835]: I0318 13:36:50.675346 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz"] Mar 18 13:36:50.677241 master-0 kubenswrapper[27835]: I0318 13:36:50.677201 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.699215 master-0 kubenswrapper[27835]: I0318 13:36:50.699132 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz"] Mar 18 13:36:50.831169 master-0 kubenswrapper[27835]: I0318 13:36:50.831112 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.831493 master-0 kubenswrapper[27835]: I0318 13:36:50.831474 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9knt\" (UniqueName: \"kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.831607 master-0 kubenswrapper[27835]: I0318 13:36:50.831594 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.870293 master-0 kubenswrapper[27835]: I0318 13:36:50.870228 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn"] Mar 18 13:36:50.872291 master-0 kubenswrapper[27835]: I0318 13:36:50.872269 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:50.883876 master-0 kubenswrapper[27835]: I0318 13:36:50.883825 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn"] Mar 18 13:36:50.933187 master-0 kubenswrapper[27835]: I0318 13:36:50.933065 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:50.933187 master-0 kubenswrapper[27835]: I0318 13:36:50.933135 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9knt\" (UniqueName: \"kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.933187 master-0 
kubenswrapper[27835]: I0318 13:36:50.933165 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:50.933444 master-0 kubenswrapper[27835]: I0318 13:36:50.933306 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.933444 master-0 kubenswrapper[27835]: I0318 13:36:50.933355 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj9m\" (UniqueName: \"kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:50.933444 master-0 kubenswrapper[27835]: I0318 13:36:50.933425 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.933806 master-0 kubenswrapper[27835]: I0318 13:36:50.933776 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.933877 master-0 kubenswrapper[27835]: I0318 13:36:50.933832 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.948135 master-0 kubenswrapper[27835]: I0318 13:36:50.948088 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9knt\" (UniqueName: \"kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:50.992961 master-0 kubenswrapper[27835]: I0318 13:36:50.992916 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:51.035499 master-0 kubenswrapper[27835]: I0318 13:36:51.034802 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.035499 master-0 kubenswrapper[27835]: I0318 13:36:51.034854 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.035499 master-0 kubenswrapper[27835]: I0318 13:36:51.034931 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj9m\" (UniqueName: \"kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.040541 master-0 kubenswrapper[27835]: I0318 13:36:51.038012 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.040937 master-0 kubenswrapper[27835]: I0318 13:36:51.040900 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.053736 master-0 kubenswrapper[27835]: I0318 13:36:51.053706 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj9m\" (UniqueName: \"kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.193171 master-0 kubenswrapper[27835]: I0318 13:36:51.192515 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" Mar 18 13:36:51.543811 master-0 kubenswrapper[27835]: I0318 13:36:51.543757 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz"] Mar 18 13:36:51.846769 master-0 kubenswrapper[27835]: I0318 13:36:51.846718 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn"] Mar 18 13:36:51.851396 master-0 kubenswrapper[27835]: W0318 13:36:51.851318 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78fc45e5_8894_4be0_9e43_5684e37b8e5e.slice/crio-4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e WatchSource:0}: Error finding container 4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e: Status 404 returned error can't find the container with id 4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e Mar 18 13:36:52.554740 master-0 kubenswrapper[27835]: I0318 13:36:52.554670 27835 generic.go:334] "Generic (PLEG): container finished" podID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerID="83a76c64f18537155286fa65bac87d59a502f4a1910a7218037504090f163046" exitCode=0 Mar 18 13:36:52.554961 master-0 kubenswrapper[27835]: I0318 13:36:52.554807 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerDied","Data":"83a76c64f18537155286fa65bac87d59a502f4a1910a7218037504090f163046"} Mar 18 13:36:52.554961 master-0 kubenswrapper[27835]: I0318 13:36:52.554890 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" 
event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerStarted","Data":"4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e"} Mar 18 13:36:52.557967 master-0 kubenswrapper[27835]: I0318 13:36:52.557930 27835 generic.go:334] "Generic (PLEG): container finished" podID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerID="4627b445e0bc8b6bd660c3d0cab46f86d33280f9cadbc2b72150057d955769be" exitCode=0 Mar 18 13:36:52.558500 master-0 kubenswrapper[27835]: I0318 13:36:52.558273 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" event={"ID":"4bbaa90f-2f09-4d78-8724-6ef7beea9bff","Type":"ContainerDied","Data":"4627b445e0bc8b6bd660c3d0cab46f86d33280f9cadbc2b72150057d955769be"} Mar 18 13:36:52.558500 master-0 kubenswrapper[27835]: I0318 13:36:52.558329 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" event={"ID":"4bbaa90f-2f09-4d78-8724-6ef7beea9bff","Type":"ContainerStarted","Data":"8566183878fc006f8103ae8129452cb351a38484c92fee278bf71035494f16ba"} Mar 18 13:36:54.576120 master-0 kubenswrapper[27835]: I0318 13:36:54.576040 27835 generic.go:334] "Generic (PLEG): container finished" podID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerID="d50bb26475ae65bcdca02e5b177f1584267401cd68c307d310178a692627f6c4" exitCode=0 Mar 18 13:36:54.576120 master-0 kubenswrapper[27835]: I0318 13:36:54.576115 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" event={"ID":"4bbaa90f-2f09-4d78-8724-6ef7beea9bff","Type":"ContainerDied","Data":"d50bb26475ae65bcdca02e5b177f1584267401cd68c307d310178a692627f6c4"} Mar 18 13:36:55.589944 master-0 kubenswrapper[27835]: I0318 13:36:55.589895 27835 generic.go:334] "Generic (PLEG): container finished" podID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" 
containerID="a4a2ff6bdb2dd6006770d54370ee8e116541fdae2547995f8d31f1f394d129f1" exitCode=0 Mar 18 13:36:55.590692 master-0 kubenswrapper[27835]: I0318 13:36:55.589954 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" event={"ID":"4bbaa90f-2f09-4d78-8724-6ef7beea9bff","Type":"ContainerDied","Data":"a4a2ff6bdb2dd6006770d54370ee8e116541fdae2547995f8d31f1f394d129f1"} Mar 18 13:36:56.598028 master-0 kubenswrapper[27835]: I0318 13:36:56.597907 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerStarted","Data":"3cc8795fcc375647f407b14c93987ea4eebade6485f0a8993a0e57aad05d6550"} Mar 18 13:36:57.041105 master-0 kubenswrapper[27835]: I0318 13:36:57.041047 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" Mar 18 13:36:57.057245 master-0 kubenswrapper[27835]: I0318 13:36:57.057182 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle\") pod \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " Mar 18 13:36:57.057245 master-0 kubenswrapper[27835]: I0318 13:36:57.057247 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util\") pod \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " Mar 18 13:36:57.057525 master-0 kubenswrapper[27835]: I0318 13:36:57.057285 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9knt\" (UniqueName: 
\"kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt\") pod \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\" (UID: \"4bbaa90f-2f09-4d78-8724-6ef7beea9bff\") " Mar 18 13:36:57.057871 master-0 kubenswrapper[27835]: I0318 13:36:57.057827 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle" (OuterVolumeSpecName: "bundle") pod "4bbaa90f-2f09-4d78-8724-6ef7beea9bff" (UID: "4bbaa90f-2f09-4d78-8724-6ef7beea9bff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:36:57.061672 master-0 kubenswrapper[27835]: I0318 13:36:57.061605 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt" (OuterVolumeSpecName: "kube-api-access-p9knt") pod "4bbaa90f-2f09-4d78-8724-6ef7beea9bff" (UID: "4bbaa90f-2f09-4d78-8724-6ef7beea9bff"). InnerVolumeSpecName "kube-api-access-p9knt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:36:57.071383 master-0 kubenswrapper[27835]: I0318 13:36:57.071290 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util" (OuterVolumeSpecName: "util") pod "4bbaa90f-2f09-4d78-8724-6ef7beea9bff" (UID: "4bbaa90f-2f09-4d78-8724-6ef7beea9bff"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:36:57.159813 master-0 kubenswrapper[27835]: I0318 13:36:57.159727 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:57.159813 master-0 kubenswrapper[27835]: I0318 13:36:57.159785 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-util\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:57.159813 master-0 kubenswrapper[27835]: I0318 13:36:57.159800 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9knt\" (UniqueName: \"kubernetes.io/projected/4bbaa90f-2f09-4d78-8724-6ef7beea9bff-kube-api-access-p9knt\") on node \"master-0\" DevicePath \"\"" Mar 18 13:36:57.608269 master-0 kubenswrapper[27835]: I0318 13:36:57.608072 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz"
Mar 18 13:36:57.608269 master-0 kubenswrapper[27835]: I0318 13:36:57.608140 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874g9prz" event={"ID":"4bbaa90f-2f09-4d78-8724-6ef7beea9bff","Type":"ContainerDied","Data":"8566183878fc006f8103ae8129452cb351a38484c92fee278bf71035494f16ba"}
Mar 18 13:36:57.608269 master-0 kubenswrapper[27835]: I0318 13:36:57.608227 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8566183878fc006f8103ae8129452cb351a38484c92fee278bf71035494f16ba"
Mar 18 13:36:57.610532 master-0 kubenswrapper[27835]: I0318 13:36:57.610402 27835 generic.go:334] "Generic (PLEG): container finished" podID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerID="3cc8795fcc375647f407b14c93987ea4eebade6485f0a8993a0e57aad05d6550" exitCode=0
Mar 18 13:36:57.610532 master-0 kubenswrapper[27835]: I0318 13:36:57.610486 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerDied","Data":"3cc8795fcc375647f407b14c93987ea4eebade6485f0a8993a0e57aad05d6550"}
Mar 18 13:36:58.491283 master-0 kubenswrapper[27835]: I0318 13:36:58.491188 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"]
Mar 18 13:36:58.491635 master-0 kubenswrapper[27835]: E0318 13:36:58.491604 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="util"
Mar 18 13:36:58.491635 master-0 kubenswrapper[27835]: I0318 13:36:58.491627 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="util"
Mar 18 13:36:58.491735 master-0 kubenswrapper[27835]: E0318 13:36:58.491675 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="extract"
Mar 18 13:36:58.491735 master-0 kubenswrapper[27835]: I0318 13:36:58.491686 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="extract"
Mar 18 13:36:58.491735 master-0 kubenswrapper[27835]: E0318 13:36:58.491695 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="pull"
Mar 18 13:36:58.491735 master-0 kubenswrapper[27835]: I0318 13:36:58.491703 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="pull"
Mar 18 13:36:58.491935 master-0 kubenswrapper[27835]: I0318 13:36:58.491908 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bbaa90f-2f09-4d78-8724-6ef7beea9bff" containerName="extract"
Mar 18 13:36:58.492997 master-0 kubenswrapper[27835]: I0318 13:36:58.492964 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.517548 master-0 kubenswrapper[27835]: I0318 13:36:58.517467 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"]
Mar 18 13:36:58.583142 master-0 kubenswrapper[27835]: I0318 13:36:58.583041 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.583592 master-0 kubenswrapper[27835]: I0318 13:36:58.583268 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zd5t\" (UniqueName: \"kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.583923 master-0 kubenswrapper[27835]: I0318 13:36:58.583822 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.623166 master-0 kubenswrapper[27835]: I0318 13:36:58.623036 27835 generic.go:334] "Generic (PLEG): container finished" podID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerID="b39206d997e5ed7604e7b2d5583a15a8b5e7f1a02a717f12b701c1e61ba3339b" exitCode=0
Mar 18 13:36:58.623166 master-0 kubenswrapper[27835]: I0318 13:36:58.623153 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerDied","Data":"b39206d997e5ed7604e7b2d5583a15a8b5e7f1a02a717f12b701c1e61ba3339b"}
Mar 18 13:36:58.686271 master-0 kubenswrapper[27835]: I0318 13:36:58.686186 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zd5t\" (UniqueName: \"kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.686546 master-0 kubenswrapper[27835]: I0318 13:36:58.686479 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.686892 master-0 kubenswrapper[27835]: I0318 13:36:58.686822 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.687538 master-0 kubenswrapper[27835]: I0318 13:36:58.687388 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.687685 master-0 kubenswrapper[27835]: I0318 13:36:58.687634 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.709309 master-0 kubenswrapper[27835]: I0318 13:36:58.709211 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zd5t\" (UniqueName: \"kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:58.811060 master-0 kubenswrapper[27835]: I0318 13:36:58.810860 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:36:59.259015 master-0 kubenswrapper[27835]: I0318 13:36:59.258953 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"]
Mar 18 13:36:59.262026 master-0 kubenswrapper[27835]: W0318 13:36:59.261988 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16a15672_ffbe_4973_84bf_ce847214d0a1.slice/crio-47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e WatchSource:0}: Error finding container 47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e: Status 404 returned error can't find the container with id 47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e
Mar 18 13:36:59.633544 master-0 kubenswrapper[27835]: I0318 13:36:59.633327 27835 generic.go:334] "Generic (PLEG): container finished" podID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerID="1c495601803bf3f6b1eaccc4c7dcf6479dfe0c045460012c5a9344ad444bc426" exitCode=0
Mar 18 13:36:59.633544 master-0 kubenswrapper[27835]: I0318 13:36:59.633381 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d" event={"ID":"16a15672-ffbe-4973-84bf-ce847214d0a1","Type":"ContainerDied","Data":"1c495601803bf3f6b1eaccc4c7dcf6479dfe0c045460012c5a9344ad444bc426"}
Mar 18 13:36:59.634295 master-0 kubenswrapper[27835]: I0318 13:36:59.633769 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d" event={"ID":"16a15672-ffbe-4973-84bf-ce847214d0a1","Type":"ContainerStarted","Data":"47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e"}
Mar 18 13:37:00.007026 master-0 kubenswrapper[27835]: I0318 13:37:00.006962 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn"
Mar 18 13:37:00.121008 master-0 kubenswrapper[27835]: I0318 13:37:00.120946 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle\") pod \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") "
Mar 18 13:37:00.121239 master-0 kubenswrapper[27835]: I0318 13:37:00.121098 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njj9m\" (UniqueName: \"kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m\") pod \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") "
Mar 18 13:37:00.121239 master-0 kubenswrapper[27835]: I0318 13:37:00.121173 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util\") pod \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\" (UID: \"78fc45e5-8894-4be0-9e43-5684e37b8e5e\") "
Mar 18 13:37:00.122447 master-0 kubenswrapper[27835]: I0318 13:37:00.122393 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle" (OuterVolumeSpecName: "bundle") pod "78fc45e5-8894-4be0-9e43-5684e37b8e5e" (UID: "78fc45e5-8894-4be0-9e43-5684e37b8e5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:37:00.124635 master-0 kubenswrapper[27835]: I0318 13:37:00.124576 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m" (OuterVolumeSpecName: "kube-api-access-njj9m") pod "78fc45e5-8894-4be0-9e43-5684e37b8e5e" (UID: "78fc45e5-8894-4be0-9e43-5684e37b8e5e"). InnerVolumeSpecName "kube-api-access-njj9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:37:00.131921 master-0 kubenswrapper[27835]: I0318 13:37:00.131881 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util" (OuterVolumeSpecName: "util") pod "78fc45e5-8894-4be0-9e43-5684e37b8e5e" (UID: "78fc45e5-8894-4be0-9e43-5684e37b8e5e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:37:00.223718 master-0 kubenswrapper[27835]: I0318 13:37:00.223568 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:00.223718 master-0 kubenswrapper[27835]: I0318 13:37:00.223626 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78fc45e5-8894-4be0-9e43-5684e37b8e5e-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:00.223718 master-0 kubenswrapper[27835]: I0318 13:37:00.223646 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njj9m\" (UniqueName: \"kubernetes.io/projected/78fc45e5-8894-4be0-9e43-5684e37b8e5e-kube-api-access-njj9m\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:00.647228 master-0 kubenswrapper[27835]: I0318 13:37:00.647062 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn" event={"ID":"78fc45e5-8894-4be0-9e43-5684e37b8e5e","Type":"ContainerDied","Data":"4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e"}
Mar 18 13:37:00.647228 master-0 kubenswrapper[27835]: I0318 13:37:00.647128 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec64ab62992afd171e6b140d8a0d167e8bbb79230ae16c781ea1d400de9e53e"
Mar 18 13:37:00.647228 master-0 kubenswrapper[27835]: I0318 13:37:00.647149 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5zznbn"
Mar 18 13:37:01.656562 master-0 kubenswrapper[27835]: I0318 13:37:01.656459 27835 generic.go:334] "Generic (PLEG): container finished" podID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerID="cb81e660ef0f3bb7923a9155c17eb6367c8735392a10a0fc2d3d60d7b27a0cc8" exitCode=0
Mar 18 13:37:01.656562 master-0 kubenswrapper[27835]: I0318 13:37:01.656511 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d" event={"ID":"16a15672-ffbe-4973-84bf-ce847214d0a1","Type":"ContainerDied","Data":"cb81e660ef0f3bb7923a9155c17eb6367c8735392a10a0fc2d3d60d7b27a0cc8"}
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.949067 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"]
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: E0318 13:37:01.949567 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="util"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.949587 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="util"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: E0318 13:37:01.949597 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="pull"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.949607 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="pull"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: E0318 13:37:01.949656 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="extract"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.949663 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="extract"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.949848 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fc45e5-8894-4be0-9e43-5684e37b8e5e" containerName="extract"
Mar 18 13:37:01.954442 master-0 kubenswrapper[27835]: I0318 13:37:01.951326 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"
Mar 18 13:37:01.961057 master-0 kubenswrapper[27835]: I0318 13:37:01.955849 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 18 13:37:01.961057 master-0 kubenswrapper[27835]: I0318 13:37:01.956404 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 18 13:37:01.963096 master-0 kubenswrapper[27835]: I0318 13:37:01.962134 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"]
Mar 18 13:37:02.052699 master-0 kubenswrapper[27835]: I0318 13:37:02.052608 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqctz\" (UniqueName: \"kubernetes.io/projected/62a087e6-1ed1-4cab-9c7e-f712ed437c35-kube-api-access-fqctz\") pod \"nmstate-operator-796d4cfff4-xp5mm\" (UID: \"62a087e6-1ed1-4cab-9c7e-f712ed437c35\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"
Mar 18 13:37:02.154689 master-0 kubenswrapper[27835]: I0318 13:37:02.154149 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqctz\" (UniqueName: \"kubernetes.io/projected/62a087e6-1ed1-4cab-9c7e-f712ed437c35-kube-api-access-fqctz\") pod \"nmstate-operator-796d4cfff4-xp5mm\" (UID: \"62a087e6-1ed1-4cab-9c7e-f712ed437c35\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"
Mar 18 13:37:02.173107 master-0 kubenswrapper[27835]: I0318 13:37:02.173048 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqctz\" (UniqueName: \"kubernetes.io/projected/62a087e6-1ed1-4cab-9c7e-f712ed437c35-kube-api-access-fqctz\") pod \"nmstate-operator-796d4cfff4-xp5mm\" (UID: \"62a087e6-1ed1-4cab-9c7e-f712ed437c35\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"
Mar 18 13:37:02.320626 master-0 kubenswrapper[27835]: I0318 13:37:02.320495 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"
Mar 18 13:37:02.708859 master-0 kubenswrapper[27835]: I0318 13:37:02.708791 27835 generic.go:334] "Generic (PLEG): container finished" podID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerID="e950cb24c53755000e255378adf623b1e162cf2d6fbcaa8370440f11747c06a9" exitCode=0
Mar 18 13:37:02.708859 master-0 kubenswrapper[27835]: I0318 13:37:02.708857 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d" event={"ID":"16a15672-ffbe-4973-84bf-ce847214d0a1","Type":"ContainerDied","Data":"e950cb24c53755000e255378adf623b1e162cf2d6fbcaa8370440f11747c06a9"}
Mar 18 13:37:02.766021 master-0 kubenswrapper[27835]: I0318 13:37:02.765935 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm"]
Mar 18 13:37:02.775844 master-0 kubenswrapper[27835]: W0318 13:37:02.775725 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62a087e6_1ed1_4cab_9c7e_f712ed437c35.slice/crio-55c23edf65208ee08199a3cf16afc6c25d788ec99d951ee05ba5cf8505b330b7 WatchSource:0}: Error finding container 55c23edf65208ee08199a3cf16afc6c25d788ec99d951ee05ba5cf8505b330b7: Status 404 returned error can't find the container with id 55c23edf65208ee08199a3cf16afc6c25d788ec99d951ee05ba5cf8505b330b7
Mar 18 13:37:03.724882 master-0 kubenswrapper[27835]: I0318 13:37:03.724805 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm" event={"ID":"62a087e6-1ed1-4cab-9c7e-f712ed437c35","Type":"ContainerStarted","Data":"55c23edf65208ee08199a3cf16afc6c25d788ec99d951ee05ba5cf8505b330b7"}
Mar 18 13:37:04.050061 master-0 kubenswrapper[27835]: I0318 13:37:04.050018 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:37:04.090487 master-0 kubenswrapper[27835]: I0318 13:37:04.090399 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zd5t\" (UniqueName: \"kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t\") pod \"16a15672-ffbe-4973-84bf-ce847214d0a1\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") "
Mar 18 13:37:04.090788 master-0 kubenswrapper[27835]: I0318 13:37:04.090574 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util\") pod \"16a15672-ffbe-4973-84bf-ce847214d0a1\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") "
Mar 18 13:37:04.090788 master-0 kubenswrapper[27835]: I0318 13:37:04.090628 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle\") pod \"16a15672-ffbe-4973-84bf-ce847214d0a1\" (UID: \"16a15672-ffbe-4973-84bf-ce847214d0a1\") "
Mar 18 13:37:04.093470 master-0 kubenswrapper[27835]: I0318 13:37:04.093005 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t" (OuterVolumeSpecName: "kube-api-access-9zd5t") pod "16a15672-ffbe-4973-84bf-ce847214d0a1" (UID: "16a15672-ffbe-4973-84bf-ce847214d0a1"). InnerVolumeSpecName "kube-api-access-9zd5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:37:04.093470 master-0 kubenswrapper[27835]: I0318 13:37:04.093258 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle" (OuterVolumeSpecName: "bundle") pod "16a15672-ffbe-4973-84bf-ce847214d0a1" (UID: "16a15672-ffbe-4973-84bf-ce847214d0a1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:37:04.110952 master-0 kubenswrapper[27835]: I0318 13:37:04.110849 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util" (OuterVolumeSpecName: "util") pod "16a15672-ffbe-4973-84bf-ce847214d0a1" (UID: "16a15672-ffbe-4973-84bf-ce847214d0a1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:37:04.192849 master-0 kubenswrapper[27835]: I0318 13:37:04.192788 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:04.192849 master-0 kubenswrapper[27835]: I0318 13:37:04.192850 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/16a15672-ffbe-4973-84bf-ce847214d0a1-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:04.193105 master-0 kubenswrapper[27835]: I0318 13:37:04.192864 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zd5t\" (UniqueName: \"kubernetes.io/projected/16a15672-ffbe-4973-84bf-ce847214d0a1-kube-api-access-9zd5t\") on node \"master-0\" DevicePath \"\""
Mar 18 13:37:04.735347 master-0 kubenswrapper[27835]: I0318 13:37:04.735262 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d" event={"ID":"16a15672-ffbe-4973-84bf-ce847214d0a1","Type":"ContainerDied","Data":"47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e"}
Mar 18 13:37:04.735347 master-0 kubenswrapper[27835]: I0318 13:37:04.735325 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d3781d8697157ab4f9ad221ba3d0032dc9a746ef0b67aed532b96cdd523a7e"
Mar 18 13:37:04.735996 master-0 kubenswrapper[27835]: I0318 13:37:04.735345 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726gsd9d"
Mar 18 13:37:05.286212 master-0 kubenswrapper[27835]: I0318 13:37:05.286133 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"]
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: E0318 13:37:05.286678 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="extract"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: I0318 13:37:05.286702 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="extract"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: E0318 13:37:05.286752 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="pull"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: I0318 13:37:05.286758 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="pull"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: E0318 13:37:05.286769 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="util"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: I0318 13:37:05.286775 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="util"
Mar 18 13:37:05.286946 master-0 kubenswrapper[27835]: I0318 13:37:05.286946 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a15672-ffbe-4973-84bf-ce847214d0a1" containerName="extract"
Mar 18 13:37:05.288301 master-0 kubenswrapper[27835]: I0318 13:37:05.288229 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.296066 master-0 kubenswrapper[27835]: I0318 13:37:05.296000 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"]
Mar 18 13:37:05.320545 master-0 kubenswrapper[27835]: I0318 13:37:05.317665 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.320545 master-0 kubenswrapper[27835]: I0318 13:37:05.317776 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.320545 master-0 kubenswrapper[27835]: I0318 13:37:05.317813 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n995c\" (UniqueName: \"kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.425494 master-0 kubenswrapper[27835]: I0318 13:37:05.419827 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.425494 master-0 kubenswrapper[27835]: I0318 13:37:05.419911 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.425494 master-0 kubenswrapper[27835]: I0318 13:37:05.419937 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n995c\" (UniqueName: \"kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.425494 master-0 kubenswrapper[27835]: I0318 13:37:05.420705 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.425494 master-0 kubenswrapper[27835]: I0318 13:37:05.420931 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.439029 master-0 kubenswrapper[27835]: I0318 13:37:05.438994 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n995c\" (UniqueName: \"kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:05.629883 master-0 kubenswrapper[27835]: I0318 13:37:05.629753 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"
Mar 18 13:37:06.196389 master-0 kubenswrapper[27835]: I0318 13:37:06.194200 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f"]
Mar 18 13:37:06.202170 master-0 kubenswrapper[27835]: W0318 13:37:06.202098 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba19e727_759c_4f25_aa08_63384f07f765.slice/crio-1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18 WatchSource:0}: Error finding container 1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18: Status 404 returned error can't find the container with id 1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18
Mar 18 13:37:06.752265 master-0 kubenswrapper[27835]: I0318 13:37:06.752221 27835 generic.go:334] "Generic (PLEG): container finished" podID="ba19e727-759c-4f25-aa08-63384f07f765" containerID="133a29929155f92f43338b55c9b58029104ed44650d619fc0a3a2f984e3d25d6" exitCode=0
Mar 18 13:37:06.752620 master-0 kubenswrapper[27835]: I0318 13:37:06.752280 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" event={"ID":"ba19e727-759c-4f25-aa08-63384f07f765","Type":"ContainerDied","Data":"133a29929155f92f43338b55c9b58029104ed44650d619fc0a3a2f984e3d25d6"}
Mar 18 13:37:06.752620 master-0 kubenswrapper[27835]: I0318 13:37:06.752303 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" event={"ID":"ba19e727-759c-4f25-aa08-63384f07f765","Type":"ContainerStarted","Data":"1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18"}
Mar 18 13:37:06.754283 master-0 kubenswrapper[27835]: I0318 13:37:06.754231 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm" event={"ID":"62a087e6-1ed1-4cab-9c7e-f712ed437c35","Type":"ContainerStarted","Data":"2ca96d43bd71f88243e5a68c57a6f6146a0f685c7dc47ff941165eb6101ee1cc"}
Mar 18 13:37:07.036164 master-0 kubenswrapper[27835]: I0318 13:37:07.035995 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-xp5mm" podStartSLOduration=3.107414467 podStartE2EDuration="6.035974138s" podCreationTimestamp="2026-03-18 13:37:01 +0000 UTC" firstStartedPulling="2026-03-18 13:37:02.77876528 +0000 UTC m=+786.743976840" lastFinishedPulling="2026-03-18 13:37:05.707324951 +0000 UTC m=+789.672536511" observedRunningTime="2026-03-18 13:37:07.030177699 +0000 UTC m=+790.995389269" watchObservedRunningTime="2026-03-18 13:37:07.035974138 +0000 UTC m=+791.001185698"
Mar 18 13:37:08.769843 master-0 kubenswrapper[27835]: I0318 13:37:08.769690 27835 generic.go:334] "Generic (PLEG): container finished" podID="ba19e727-759c-4f25-aa08-63384f07f765" containerID="ef95868de3a7a5dad24b42aa7e7db5f057030329bb58e98f87270f228ff5e9f9" exitCode=0
Mar 18 13:37:08.769843 master-0 kubenswrapper[27835]: I0318 13:37:08.769748 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" event={"ID":"ba19e727-759c-4f25-aa08-63384f07f765","Type":"ContainerDied","Data":"ef95868de3a7a5dad24b42aa7e7db5f057030329bb58e98f87270f228ff5e9f9"}
Mar 18 13:37:09.779106 master-0 kubenswrapper[27835]: I0318 13:37:09.779007 27835 generic.go:334] "Generic (PLEG): container finished" podID="ba19e727-759c-4f25-aa08-63384f07f765" containerID="6efa1e81e55819c2a735134880168f903072dd742386af5456e5b0757dca92ae" exitCode=0
Mar 18 13:37:09.779106 master-0 kubenswrapper[27835]: I0318 13:37:09.779068 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" event={"ID":"ba19e727-759c-4f25-aa08-63384f07f765","Type":"ContainerDied","Data":"6efa1e81e55819c2a735134880168f903072dd742386af5456e5b0757dca92ae"}
Mar 18 13:37:09.879122 master-0 kubenswrapper[27835]: I0318 13:37:09.879053 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"]
Mar 18 13:37:09.880777 master-0 kubenswrapper[27835]: I0318 13:37:09.880740 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:09.883047 master-0 kubenswrapper[27835]: I0318 13:37:09.882992 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 18 13:37:09.883524 master-0 kubenswrapper[27835]: I0318 13:37:09.883496 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 18 13:37:09.906498 master-0 kubenswrapper[27835]: I0318 13:37:09.906442 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"]
Mar 18 13:37:10.000702 master-0 kubenswrapper[27835]: I0318 13:37:10.000605 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h84gx\" (UniqueName: \"kubernetes.io/projected/5226ae9a-bfdb-402e-9447-e423f8638c2a-kube-api-access-h84gx\") pod \"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:10.000944 master-0 kubenswrapper[27835]: I0318 13:37:10.000744 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5226ae9a-bfdb-402e-9447-e423f8638c2a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:10.102471 master-0 kubenswrapper[27835]: I0318 13:37:10.102315 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h84gx\" (UniqueName: \"kubernetes.io/projected/5226ae9a-bfdb-402e-9447-e423f8638c2a-kube-api-access-h84gx\") pod \"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:10.102471 master-0 kubenswrapper[27835]: I0318 13:37:10.102464 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5226ae9a-bfdb-402e-9447-e423f8638c2a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:10.102959 master-0 kubenswrapper[27835]: I0318 13:37:10.102926 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5226ae9a-bfdb-402e-9447-e423f8638c2a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"
Mar 18 13:37:10.122482 master-0 kubenswrapper[27835]: I0318 13:37:10.122433 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h84gx\" (UniqueName: \"kubernetes.io/projected/5226ae9a-bfdb-402e-9447-e423f8638c2a-kube-api-access-h84gx\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-njr9z\" (UID: \"5226ae9a-bfdb-402e-9447-e423f8638c2a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z" Mar 18 13:37:10.203511 master-0 kubenswrapper[27835]: I0318 13:37:10.203422 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z" Mar 18 13:37:10.633510 master-0 kubenswrapper[27835]: I0318 13:37:10.633372 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z"] Mar 18 13:37:10.637782 master-0 kubenswrapper[27835]: W0318 13:37:10.637718 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5226ae9a_bfdb_402e_9447_e423f8638c2a.slice/crio-18012f6482ef00441ecd454fb861d3082a77159c8a8a5e35e6640708ad52b363 WatchSource:0}: Error finding container 18012f6482ef00441ecd454fb861d3082a77159c8a8a5e35e6640708ad52b363: Status 404 returned error can't find the container with id 18012f6482ef00441ecd454fb861d3082a77159c8a8a5e35e6640708ad52b363 Mar 18 13:37:10.788142 master-0 kubenswrapper[27835]: I0318 13:37:10.788073 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z" event={"ID":"5226ae9a-bfdb-402e-9447-e423f8638c2a","Type":"ContainerStarted","Data":"18012f6482ef00441ecd454fb861d3082a77159c8a8a5e35e6640708ad52b363"} Mar 18 13:37:11.158435 master-0 kubenswrapper[27835]: I0318 13:37:11.155905 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" Mar 18 13:37:11.221765 master-0 kubenswrapper[27835]: I0318 13:37:11.221573 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util\") pod \"ba19e727-759c-4f25-aa08-63384f07f765\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " Mar 18 13:37:11.221765 master-0 kubenswrapper[27835]: I0318 13:37:11.221706 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle\") pod \"ba19e727-759c-4f25-aa08-63384f07f765\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " Mar 18 13:37:11.222030 master-0 kubenswrapper[27835]: I0318 13:37:11.221849 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n995c\" (UniqueName: \"kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c\") pod \"ba19e727-759c-4f25-aa08-63384f07f765\" (UID: \"ba19e727-759c-4f25-aa08-63384f07f765\") " Mar 18 13:37:11.223115 master-0 kubenswrapper[27835]: I0318 13:37:11.222827 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle" (OuterVolumeSpecName: "bundle") pod "ba19e727-759c-4f25-aa08-63384f07f765" (UID: "ba19e727-759c-4f25-aa08-63384f07f765"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:37:11.225805 master-0 kubenswrapper[27835]: I0318 13:37:11.224969 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c" (OuterVolumeSpecName: "kube-api-access-n995c") pod "ba19e727-759c-4f25-aa08-63384f07f765" (UID: "ba19e727-759c-4f25-aa08-63384f07f765"). InnerVolumeSpecName "kube-api-access-n995c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:37:11.232698 master-0 kubenswrapper[27835]: I0318 13:37:11.232648 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util" (OuterVolumeSpecName: "util") pod "ba19e727-759c-4f25-aa08-63384f07f765" (UID: "ba19e727-759c-4f25-aa08-63384f07f765"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:37:11.328476 master-0 kubenswrapper[27835]: I0318 13:37:11.327787 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n995c\" (UniqueName: \"kubernetes.io/projected/ba19e727-759c-4f25-aa08-63384f07f765-kube-api-access-n995c\") on node \"master-0\" DevicePath \"\"" Mar 18 13:37:11.328476 master-0 kubenswrapper[27835]: I0318 13:37:11.327853 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-util\") on node \"master-0\" DevicePath \"\"" Mar 18 13:37:11.328476 master-0 kubenswrapper[27835]: I0318 13:37:11.327873 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ba19e727-759c-4f25-aa08-63384f07f765-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:37:11.815052 master-0 kubenswrapper[27835]: I0318 13:37:11.814990 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" event={"ID":"ba19e727-759c-4f25-aa08-63384f07f765","Type":"ContainerDied","Data":"1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18"} Mar 18 13:37:11.815052 master-0 kubenswrapper[27835]: I0318 13:37:11.815040 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a53da7864c98d3bd568ebeb42aae0c39c0f7f24329a8fb0fa235453506e8b18" Mar 18 13:37:11.817071 master-0 kubenswrapper[27835]: I0318 13:37:11.815119 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c16m86f" Mar 18 13:37:14.848285 master-0 kubenswrapper[27835]: I0318 13:37:14.848224 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z" event={"ID":"5226ae9a-bfdb-402e-9447-e423f8638c2a","Type":"ContainerStarted","Data":"cabaeca6a902c06937d4db1d70108d83f53fa422c8a16203aa07263ce569e8c8"} Mar 18 13:37:14.916999 master-0 kubenswrapper[27835]: I0318 13:37:14.916898 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-njr9z" podStartSLOduration=2.147743166 podStartE2EDuration="5.916877574s" podCreationTimestamp="2026-03-18 13:37:09 +0000 UTC" firstStartedPulling="2026-03-18 13:37:10.641096455 +0000 UTC m=+794.606308015" lastFinishedPulling="2026-03-18 13:37:14.410230853 +0000 UTC m=+798.375442423" observedRunningTime="2026-03-18 13:37:14.904944535 +0000 UTC m=+798.870156105" watchObservedRunningTime="2026-03-18 13:37:14.916877574 +0000 UTC m=+798.882089134" Mar 18 13:37:22.312492 master-0 kubenswrapper[27835]: I0318 13:37:22.312419 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-5hd6j"] Mar 18 13:37:22.313514 master-0 
kubenswrapper[27835]: E0318 13:37:22.313495 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="extract" Mar 18 13:37:22.313598 master-0 kubenswrapper[27835]: I0318 13:37:22.313587 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="extract" Mar 18 13:37:22.313683 master-0 kubenswrapper[27835]: E0318 13:37:22.313671 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="pull" Mar 18 13:37:22.313744 master-0 kubenswrapper[27835]: I0318 13:37:22.313734 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="pull" Mar 18 13:37:22.313819 master-0 kubenswrapper[27835]: E0318 13:37:22.313809 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="util" Mar 18 13:37:22.313934 master-0 kubenswrapper[27835]: I0318 13:37:22.313924 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="util" Mar 18 13:37:22.314174 master-0 kubenswrapper[27835]: I0318 13:37:22.314161 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba19e727-759c-4f25-aa08-63384f07f765" containerName="extract" Mar 18 13:37:22.314776 master-0 kubenswrapper[27835]: I0318 13:37:22.314755 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.316953 master-0 kubenswrapper[27835]: I0318 13:37:22.316922 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 18 13:37:22.317173 master-0 kubenswrapper[27835]: I0318 13:37:22.316991 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 18 13:37:22.325230 master-0 kubenswrapper[27835]: I0318 13:37:22.325180 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-5hd6j"] Mar 18 13:37:22.428620 master-0 kubenswrapper[27835]: I0318 13:37:22.428574 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.428941 master-0 kubenswrapper[27835]: I0318 13:37:22.428922 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7th86\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-kube-api-access-7th86\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.531518 master-0 kubenswrapper[27835]: I0318 13:37:22.531205 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 
13:37:22.531518 master-0 kubenswrapper[27835]: I0318 13:37:22.531269 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7th86\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-kube-api-access-7th86\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.668537 master-0 kubenswrapper[27835]: I0318 13:37:22.668487 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.669861 master-0 kubenswrapper[27835]: I0318 13:37:22.669117 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7th86\" (UniqueName: \"kubernetes.io/projected/9b934e6b-fae9-4024-9772-784d60b259b6-kube-api-access-7th86\") pod \"cert-manager-cainjector-5545bd876-5hd6j\" (UID: \"9b934e6b-fae9-4024-9772-784d60b259b6\") " pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:22.941999 master-0 kubenswrapper[27835]: I0318 13:37:22.941865 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" Mar 18 13:37:23.276473 master-0 kubenswrapper[27835]: I0318 13:37:23.276405 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-5hd6j"] Mar 18 13:37:23.909249 master-0 kubenswrapper[27835]: I0318 13:37:23.909194 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" event={"ID":"9b934e6b-fae9-4024-9772-784d60b259b6","Type":"ContainerStarted","Data":"745b820b79ca6c0ab7a65186773655e4cf9acee170519e2fcfba427740e6c7a3"} Mar 18 13:37:24.990851 master-0 kubenswrapper[27835]: I0318 13:37:24.990794 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5gdv7"] Mar 18 13:37:24.991766 master-0 kubenswrapper[27835]: I0318 13:37:24.991749 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.023689 master-0 kubenswrapper[27835]: I0318 13:37:25.023640 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5gdv7"] Mar 18 13:37:25.130444 master-0 kubenswrapper[27835]: I0318 13:37:25.130249 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28tx\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-kube-api-access-l28tx\") pod \"cert-manager-webhook-6888856db4-5gdv7\" (UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.130444 master-0 kubenswrapper[27835]: I0318 13:37:25.130321 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5gdv7\" 
(UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.231530 master-0 kubenswrapper[27835]: I0318 13:37:25.231467 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l28tx\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-kube-api-access-l28tx\") pod \"cert-manager-webhook-6888856db4-5gdv7\" (UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.231530 master-0 kubenswrapper[27835]: I0318 13:37:25.231535 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5gdv7\" (UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.249804 master-0 kubenswrapper[27835]: I0318 13:37:25.249551 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l28tx\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-kube-api-access-l28tx\") pod \"cert-manager-webhook-6888856db4-5gdv7\" (UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.251056 master-0 kubenswrapper[27835]: I0318 13:37:25.251018 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ccf902a-17d7-4805-bc9f-0dc6dc596312-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5gdv7\" (UID: \"5ccf902a-17d7-4805-bc9f-0dc6dc596312\") " pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.274083 master-0 kubenswrapper[27835]: I0318 13:37:25.274007 27835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg"] Mar 18 13:37:25.275123 master-0 kubenswrapper[27835]: I0318 13:37:25.275092 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" Mar 18 13:37:25.276678 master-0 kubenswrapper[27835]: I0318 13:37:25.276646 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 13:37:25.277864 master-0 kubenswrapper[27835]: I0318 13:37:25.277819 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 13:37:25.293682 master-0 kubenswrapper[27835]: I0318 13:37:25.293624 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg"] Mar 18 13:37:25.317461 master-0 kubenswrapper[27835]: I0318 13:37:25.316854 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:25.437492 master-0 kubenswrapper[27835]: I0318 13:37:25.437218 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tv6j\" (UniqueName: \"kubernetes.io/projected/63b3162b-c0b3-46ef-8894-ecfa759ce0da-kube-api-access-2tv6j\") pod \"obo-prometheus-operator-8ff7d675-4kqlg\" (UID: \"63b3162b-c0b3-46ef-8894-ecfa759ce0da\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" Mar 18 13:37:25.541509 master-0 kubenswrapper[27835]: I0318 13:37:25.541201 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tv6j\" (UniqueName: \"kubernetes.io/projected/63b3162b-c0b3-46ef-8894-ecfa759ce0da-kube-api-access-2tv6j\") pod \"obo-prometheus-operator-8ff7d675-4kqlg\" (UID: \"63b3162b-c0b3-46ef-8894-ecfa759ce0da\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" Mar 18 
13:37:25.568504 master-0 kubenswrapper[27835]: I0318 13:37:25.565914 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tv6j\" (UniqueName: \"kubernetes.io/projected/63b3162b-c0b3-46ef-8894-ecfa759ce0da-kube-api-access-2tv6j\") pod \"obo-prometheus-operator-8ff7d675-4kqlg\" (UID: \"63b3162b-c0b3-46ef-8894-ecfa759ce0da\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" Mar 18 13:37:25.630285 master-0 kubenswrapper[27835]: I0318 13:37:25.630224 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" Mar 18 13:37:25.802901 master-0 kubenswrapper[27835]: I0318 13:37:25.798910 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg"] Mar 18 13:37:25.802901 master-0 kubenswrapper[27835]: I0318 13:37:25.800304 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:25.802901 master-0 kubenswrapper[27835]: I0318 13:37:25.802330 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 13:37:25.834137 master-0 kubenswrapper[27835]: I0318 13:37:25.834064 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx"] Mar 18 13:37:25.837174 master-0 kubenswrapper[27835]: I0318 13:37:25.836566 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:25.845784 master-0 kubenswrapper[27835]: I0318 13:37:25.845737 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg"] Mar 18 13:37:25.861498 master-0 kubenswrapper[27835]: I0318 13:37:25.855649 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx"] Mar 18 13:37:25.949570 master-0 kubenswrapper[27835]: I0318 13:37:25.949278 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:25.949570 master-0 kubenswrapper[27835]: I0318 13:37:25.949356 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:25.949570 master-0 kubenswrapper[27835]: I0318 13:37:25.949403 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:25.949570 master-0 kubenswrapper[27835]: I0318 13:37:25.949455 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:25.961553 master-0 kubenswrapper[27835]: I0318 13:37:25.954297 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5gdv7"] Mar 18 13:37:26.051154 master-0 kubenswrapper[27835]: I0318 13:37:26.051090 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:26.051154 master-0 kubenswrapper[27835]: I0318 13:37:26.051151 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:26.051741 master-0 kubenswrapper[27835]: I0318 13:37:26.051187 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:26.051741 master-0 kubenswrapper[27835]: I0318 13:37:26.051208 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:26.057504 master-0 kubenswrapper[27835]: I0318 13:37:26.054136 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:26.057504 master-0 kubenswrapper[27835]: I0318 13:37:26.054763 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:26.058270 master-0 kubenswrapper[27835]: I0318 13:37:26.058189 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80eb4d62-b3a3-47af-b14a-9bdc545480c4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx\" (UID: \"80eb4d62-b3a3-47af-b14a-9bdc545480c4\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:26.061775 master-0 kubenswrapper[27835]: I0318 13:37:26.061438 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f8860c1-aaaa-419e-9af7-e730b8551861-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg\" (UID: \"2f8860c1-aaaa-419e-9af7-e730b8551861\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:26.126227 master-0 kubenswrapper[27835]: I0318 13:37:26.126105 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg"] Mar 18 13:37:26.147457 master-0 kubenswrapper[27835]: I0318 13:37:26.145245 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" Mar 18 13:37:26.173358 master-0 kubenswrapper[27835]: I0318 13:37:26.173305 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" Mar 18 13:37:26.272984 master-0 kubenswrapper[27835]: I0318 13:37:26.272651 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rwdls"] Mar 18 13:37:26.288462 master-0 kubenswrapper[27835]: I0318 13:37:26.273852 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.288462 master-0 kubenswrapper[27835]: I0318 13:37:26.285351 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 13:37:26.336035 master-0 kubenswrapper[27835]: I0318 13:37:26.334383 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rwdls"] Mar 18 13:37:26.362694 master-0 kubenswrapper[27835]: I0318 13:37:26.362585 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlp9\" (UniqueName: \"kubernetes.io/projected/28ef90df-8b3f-4d66-a172-8421be1a910f-kube-api-access-cxlp9\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.363106 master-0 kubenswrapper[27835]: I0318 13:37:26.363011 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ef90df-8b3f-4d66-a172-8421be1a910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.465198 master-0 kubenswrapper[27835]: I0318 13:37:26.465153 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ef90df-8b3f-4d66-a172-8421be1a910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.465357 master-0 kubenswrapper[27835]: I0318 13:37:26.465332 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cxlp9\" (UniqueName: \"kubernetes.io/projected/28ef90df-8b3f-4d66-a172-8421be1a910f-kube-api-access-cxlp9\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.477871 master-0 kubenswrapper[27835]: I0318 13:37:26.476304 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/28ef90df-8b3f-4d66-a172-8421be1a910f-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.500344 master-0 kubenswrapper[27835]: I0318 13:37:26.500272 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlp9\" (UniqueName: \"kubernetes.io/projected/28ef90df-8b3f-4d66-a172-8421be1a910f-kube-api-access-cxlp9\") pod \"observability-operator-6dd7dd855f-rwdls\" (UID: \"28ef90df-8b3f-4d66-a172-8421be1a910f\") " pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.634581 master-0 kubenswrapper[27835]: I0318 13:37:26.628842 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:26.674478 master-0 kubenswrapper[27835]: I0318 13:37:26.671447 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg"] Mar 18 13:37:26.751524 master-0 kubenswrapper[27835]: I0318 13:37:26.749395 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx"] Mar 18 13:37:26.973133 master-0 kubenswrapper[27835]: I0318 13:37:26.972985 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" event={"ID":"63b3162b-c0b3-46ef-8894-ecfa759ce0da","Type":"ContainerStarted","Data":"e674af394e2f3027876bc3eb183669276ba3b18f152643144b77eb5094e8bb8b"} Mar 18 13:37:26.975111 master-0 kubenswrapper[27835]: I0318 13:37:26.975065 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" event={"ID":"80eb4d62-b3a3-47af-b14a-9bdc545480c4","Type":"ContainerStarted","Data":"62de122659f4be26722670c3074e7cf9ca725b306fbb4027b0c9848b3f5f00a1"} Mar 18 13:37:26.977243 master-0 kubenswrapper[27835]: I0318 13:37:26.977215 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" event={"ID":"5ccf902a-17d7-4805-bc9f-0dc6dc596312","Type":"ContainerStarted","Data":"f1134843611bbe1d6ec4cdcfa725ed357ef1e9b8108629a7d08c6c216df72e96"} Mar 18 13:37:26.978551 master-0 kubenswrapper[27835]: I0318 13:37:26.978526 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" event={"ID":"2f8860c1-aaaa-419e-9af7-e730b8551861","Type":"ContainerStarted","Data":"60dc7d9a826d3e3a238912e2ff2fb1990ded15b387f79d157588f2574151c46f"} Mar 18 13:37:27.266438 master-0 
kubenswrapper[27835]: I0318 13:37:27.262876 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-rwdls"] Mar 18 13:37:27.297446 master-0 kubenswrapper[27835]: I0318 13:37:27.296945 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5d6bf8dcbf-mpv4c"] Mar 18 13:37:27.301444 master-0 kubenswrapper[27835]: I0318 13:37:27.297924 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.318848 master-0 kubenswrapper[27835]: I0318 13:37:27.318795 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 18 13:37:27.363457 master-0 kubenswrapper[27835]: I0318 13:37:27.358311 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5d6bf8dcbf-mpv4c"] Mar 18 13:37:27.397444 master-0 kubenswrapper[27835]: I0318 13:37:27.392099 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-webhook-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.397444 master-0 kubenswrapper[27835]: I0318 13:37:27.392209 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltnf\" (UniqueName: \"kubernetes.io/projected/84a63422-2593-4bed-9564-f5105e0d7015-kube-api-access-7ltnf\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.397444 master-0 kubenswrapper[27835]: I0318 13:37:27.392276 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/84a63422-2593-4bed-9564-f5105e0d7015-openshift-service-ca\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.397444 master-0 kubenswrapper[27835]: I0318 13:37:27.392330 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-apiservice-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.493460 master-0 kubenswrapper[27835]: I0318 13:37:27.493146 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/84a63422-2593-4bed-9564-f5105e0d7015-openshift-service-ca\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.493460 master-0 kubenswrapper[27835]: I0318 13:37:27.493203 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-apiservice-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.493460 master-0 kubenswrapper[27835]: I0318 13:37:27.493269 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-webhook-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " 
pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.493460 master-0 kubenswrapper[27835]: I0318 13:37:27.493299 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltnf\" (UniqueName: \"kubernetes.io/projected/84a63422-2593-4bed-9564-f5105e0d7015-kube-api-access-7ltnf\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.496273 master-0 kubenswrapper[27835]: I0318 13:37:27.494903 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/84a63422-2593-4bed-9564-f5105e0d7015-openshift-service-ca\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.499520 master-0 kubenswrapper[27835]: I0318 13:37:27.496861 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-apiservice-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.499520 master-0 kubenswrapper[27835]: I0318 13:37:27.497396 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84a63422-2593-4bed-9564-f5105e0d7015-webhook-cert\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.513585 master-0 kubenswrapper[27835]: I0318 13:37:27.511505 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltnf\" (UniqueName: 
\"kubernetes.io/projected/84a63422-2593-4bed-9564-f5105e0d7015-kube-api-access-7ltnf\") pod \"perses-operator-5d6bf8dcbf-mpv4c\" (UID: \"84a63422-2593-4bed-9564-f5105e0d7015\") " pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:27.652444 master-0 kubenswrapper[27835]: I0318 13:37:27.646732 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:28.002282 master-0 kubenswrapper[27835]: I0318 13:37:28.002235 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" event={"ID":"28ef90df-8b3f-4d66-a172-8421be1a910f","Type":"ContainerStarted","Data":"282d55952cba7fa175b8754294d42c089e149a6c3499f712561fb5e516b3be8c"} Mar 18 13:37:28.273772 master-0 kubenswrapper[27835]: I0318 13:37:28.273702 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5d6bf8dcbf-mpv4c"] Mar 18 13:37:28.306520 master-0 kubenswrapper[27835]: W0318 13:37:28.292235 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a63422_2593_4bed_9564_f5105e0d7015.slice/crio-04dd6f507b7d43b764a12eebc8846275514a1c913ecd6a63a184e72a828ede90 WatchSource:0}: Error finding container 04dd6f507b7d43b764a12eebc8846275514a1c913ecd6a63a184e72a828ede90: Status 404 returned error can't find the container with id 04dd6f507b7d43b764a12eebc8846275514a1c913ecd6a63a184e72a828ede90 Mar 18 13:37:28.641262 master-0 kubenswrapper[27835]: I0318 13:37:28.641133 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-rkb57"] Mar 18 13:37:28.642661 master-0 kubenswrapper[27835]: I0318 13:37:28.642620 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.655008 master-0 kubenswrapper[27835]: I0318 13:37:28.654704 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-rkb57"] Mar 18 13:37:28.738770 master-0 kubenswrapper[27835]: I0318 13:37:28.738696 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-bound-sa-token\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.738972 master-0 kubenswrapper[27835]: I0318 13:37:28.738857 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k58wg\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-kube-api-access-k58wg\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.840665 master-0 kubenswrapper[27835]: I0318 13:37:28.840537 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-bound-sa-token\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.840877 master-0 kubenswrapper[27835]: I0318 13:37:28.840668 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k58wg\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-kube-api-access-k58wg\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.862231 master-0 
kubenswrapper[27835]: I0318 13:37:28.862181 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-bound-sa-token\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.862974 master-0 kubenswrapper[27835]: I0318 13:37:28.862944 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k58wg\" (UniqueName: \"kubernetes.io/projected/ae584c06-a5ee-4994-b744-c3fd2bd1933e-kube-api-access-k58wg\") pod \"cert-manager-545d4d4674-rkb57\" (UID: \"ae584c06-a5ee-4994-b744-c3fd2bd1933e\") " pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:28.962760 master-0 kubenswrapper[27835]: I0318 13:37:28.962540 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-rkb57" Mar 18 13:37:29.015904 master-0 kubenswrapper[27835]: I0318 13:37:29.012057 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" event={"ID":"84a63422-2593-4bed-9564-f5105e0d7015","Type":"ContainerStarted","Data":"04dd6f507b7d43b764a12eebc8846275514a1c913ecd6a63a184e72a828ede90"} Mar 18 13:37:29.442309 master-0 kubenswrapper[27835]: I0318 13:37:29.442226 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-rkb57"] Mar 18 13:37:29.465275 master-0 kubenswrapper[27835]: W0318 13:37:29.464025 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae584c06_a5ee_4994_b744_c3fd2bd1933e.slice/crio-0f5312c7b092c43f7e701e55f81d0735f5aa76e58d98e713affd30fbcc9b253c WatchSource:0}: Error finding container 0f5312c7b092c43f7e701e55f81d0735f5aa76e58d98e713affd30fbcc9b253c: Status 404 returned error can't find the container with id 
0f5312c7b092c43f7e701e55f81d0735f5aa76e58d98e713affd30fbcc9b253c Mar 18 13:37:30.035530 master-0 kubenswrapper[27835]: I0318 13:37:30.035470 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-rkb57" event={"ID":"ae584c06-a5ee-4994-b744-c3fd2bd1933e","Type":"ContainerStarted","Data":"0f5312c7b092c43f7e701e55f81d0735f5aa76e58d98e713affd30fbcc9b253c"} Mar 18 13:37:36.208441 master-0 kubenswrapper[27835]: I0318 13:37:36.201984 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-549849bb46-fnr78"] Mar 18 13:37:36.208441 master-0 kubenswrapper[27835]: I0318 13:37:36.203276 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.208441 master-0 kubenswrapper[27835]: I0318 13:37:36.207473 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 18 13:37:36.214120 master-0 kubenswrapper[27835]: I0318 13:37:36.209358 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 18 13:37:36.214120 master-0 kubenswrapper[27835]: I0318 13:37:36.212280 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 18 13:37:36.231487 master-0 kubenswrapper[27835]: I0318 13:37:36.221902 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 18 13:37:36.238533 master-0 kubenswrapper[27835]: I0318 13:37:36.238459 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-549849bb46-fnr78"] Mar 18 13:37:36.329623 master-0 kubenswrapper[27835]: I0318 13:37:36.329561 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-brnlh\" (UniqueName: \"kubernetes.io/projected/ebfb29f2-806f-4507-8235-055d55cd360b-kube-api-access-brnlh\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.330049 master-0 kubenswrapper[27835]: I0318 13:37:36.329985 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-apiservice-cert\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.330145 master-0 kubenswrapper[27835]: I0318 13:37:36.330095 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-webhook-cert\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.431616 master-0 kubenswrapper[27835]: I0318 13:37:36.431572 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-apiservice-cert\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.431944 master-0 kubenswrapper[27835]: I0318 13:37:36.431920 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-webhook-cert\") pod 
\"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.432070 master-0 kubenswrapper[27835]: I0318 13:37:36.432056 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brnlh\" (UniqueName: \"kubernetes.io/projected/ebfb29f2-806f-4507-8235-055d55cd360b-kube-api-access-brnlh\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.434866 master-0 kubenswrapper[27835]: I0318 13:37:36.434847 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-apiservice-cert\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.438263 master-0 kubenswrapper[27835]: I0318 13:37:36.438217 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ebfb29f2-806f-4507-8235-055d55cd360b-webhook-cert\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.482855 master-0 kubenswrapper[27835]: I0318 13:37:36.482754 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brnlh\" (UniqueName: \"kubernetes.io/projected/ebfb29f2-806f-4507-8235-055d55cd360b-kube-api-access-brnlh\") pod \"metallb-operator-controller-manager-549849bb46-fnr78\" (UID: \"ebfb29f2-806f-4507-8235-055d55cd360b\") " 
pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:36.554190 master-0 kubenswrapper[27835]: I0318 13:37:36.554136 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:37.066441 master-0 kubenswrapper[27835]: I0318 13:37:37.061315 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql"] Mar 18 13:37:37.066441 master-0 kubenswrapper[27835]: I0318 13:37:37.062567 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.069012 master-0 kubenswrapper[27835]: I0318 13:37:37.066812 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 13:37:37.069012 master-0 kubenswrapper[27835]: I0318 13:37:37.067030 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 18 13:37:37.080442 master-0 kubenswrapper[27835]: I0318 13:37:37.074727 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql"] Mar 18 13:37:37.149204 master-0 kubenswrapper[27835]: I0318 13:37:37.149129 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.149204 master-0 kubenswrapper[27835]: I0318 13:37:37.149196 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-webhook-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.149542 master-0 kubenswrapper[27835]: I0318 13:37:37.149293 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mksph\" (UniqueName: \"kubernetes.io/projected/01ddc4ef-d5f9-4761-b728-828f9a107b0c-kube-api-access-mksph\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.255038 master-0 kubenswrapper[27835]: I0318 13:37:37.254367 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mksph\" (UniqueName: \"kubernetes.io/projected/01ddc4ef-d5f9-4761-b728-828f9a107b0c-kube-api-access-mksph\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.255038 master-0 kubenswrapper[27835]: I0318 13:37:37.254468 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.255038 master-0 kubenswrapper[27835]: I0318 13:37:37.254502 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-webhook-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: 
\"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.258687 master-0 kubenswrapper[27835]: I0318 13:37:37.258305 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-webhook-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.264130 master-0 kubenswrapper[27835]: I0318 13:37:37.264071 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01ddc4ef-d5f9-4761-b728-828f9a107b0c-apiservice-cert\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.303942 master-0 kubenswrapper[27835]: I0318 13:37:37.303899 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mksph\" (UniqueName: \"kubernetes.io/projected/01ddc4ef-d5f9-4761-b728-828f9a107b0c-kube-api-access-mksph\") pod \"metallb-operator-webhook-server-6c4dc89f9c-lppql\" (UID: \"01ddc4ef-d5f9-4761-b728-828f9a107b0c\") " pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:37.457196 master-0 kubenswrapper[27835]: I0318 13:37:37.456110 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:45.366230 master-0 kubenswrapper[27835]: I0318 13:37:45.365689 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" event={"ID":"5ccf902a-17d7-4805-bc9f-0dc6dc596312","Type":"ContainerStarted","Data":"3c239a9182a235aa386123a21541ed6ec85060efedff2205f3076b2a6768a16a"} Mar 18 13:37:45.366230 master-0 kubenswrapper[27835]: I0318 13:37:45.365959 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:45.411380 master-0 kubenswrapper[27835]: W0318 13:37:45.411337 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebfb29f2_806f_4507_8235_055d55cd360b.slice/crio-fa572ff5101913dd0bae0ba39bec0f157c9f241cca6f274f56a73b184fe94496 WatchSource:0}: Error finding container fa572ff5101913dd0bae0ba39bec0f157c9f241cca6f274f56a73b184fe94496: Status 404 returned error can't find the container with id fa572ff5101913dd0bae0ba39bec0f157c9f241cca6f274f56a73b184fe94496 Mar 18 13:37:45.411849 master-0 kubenswrapper[27835]: I0318 13:37:45.411816 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" event={"ID":"63b3162b-c0b3-46ef-8894-ecfa759ce0da","Type":"ContainerStarted","Data":"2d059517491e9e1a27eb290b716c05d09aeae19b54ee7844b6795f3e0397e70d"} Mar 18 13:37:45.435627 master-0 kubenswrapper[27835]: I0318 13:37:45.431924 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-549849bb46-fnr78"] Mar 18 13:37:45.442486 master-0 kubenswrapper[27835]: I0318 13:37:45.437273 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" podStartSLOduration=2.616834873 
podStartE2EDuration="21.437257937s" podCreationTimestamp="2026-03-18 13:37:24 +0000 UTC" firstStartedPulling="2026-03-18 13:37:25.968973667 +0000 UTC m=+809.934185227" lastFinishedPulling="2026-03-18 13:37:44.789396731 +0000 UTC m=+828.754608291" observedRunningTime="2026-03-18 13:37:45.400560846 +0000 UTC m=+829.365772406" watchObservedRunningTime="2026-03-18 13:37:45.437257937 +0000 UTC m=+829.402469487" Mar 18 13:37:45.445254 master-0 kubenswrapper[27835]: I0318 13:37:45.443660 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" event={"ID":"28ef90df-8b3f-4d66-a172-8421be1a910f","Type":"ContainerStarted","Data":"bc4e2068c05057d53f2428e3f93723315415219f433c235edcc697bdb78b0e68"} Mar 18 13:37:45.445254 master-0 kubenswrapper[27835]: I0318 13:37:45.444712 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:45.449631 master-0 kubenswrapper[27835]: I0318 13:37:45.449560 27835 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-rwdls container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.123:8081/healthz\": dial tcp 10.128.0.123:8081: connect: connection refused" start-of-body= Mar 18 13:37:45.449631 master-0 kubenswrapper[27835]: I0318 13:37:45.449627 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" podUID="28ef90df-8b3f-4d66-a172-8421be1a910f" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.123:8081/healthz\": dial tcp 10.128.0.123:8081: connect: connection refused" Mar 18 13:37:45.474964 master-0 kubenswrapper[27835]: I0318 13:37:45.474852 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" podStartSLOduration=2.259957129 
podStartE2EDuration="20.474812781s" podCreationTimestamp="2026-03-18 13:37:25 +0000 UTC" firstStartedPulling="2026-03-18 13:37:26.714497279 +0000 UTC m=+810.679708839" lastFinishedPulling="2026-03-18 13:37:44.929352931 +0000 UTC m=+828.894564491" observedRunningTime="2026-03-18 13:37:45.460732511 +0000 UTC m=+829.425944071" watchObservedRunningTime="2026-03-18 13:37:45.474812781 +0000 UTC m=+829.440024341" Mar 18 13:37:45.613898 master-0 kubenswrapper[27835]: I0318 13:37:45.613833 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" podStartSLOduration=1.978735149 podStartE2EDuration="19.613795894s" podCreationTimestamp="2026-03-18 13:37:26 +0000 UTC" firstStartedPulling="2026-03-18 13:37:27.271126041 +0000 UTC m=+811.236337601" lastFinishedPulling="2026-03-18 13:37:44.906186786 +0000 UTC m=+828.871398346" observedRunningTime="2026-03-18 13:37:45.537923046 +0000 UTC m=+829.503134616" watchObservedRunningTime="2026-03-18 13:37:45.613795894 +0000 UTC m=+829.579007444" Mar 18 13:37:45.638473 master-0 kubenswrapper[27835]: I0318 13:37:45.637484 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql"] Mar 18 13:37:45.657678 master-0 kubenswrapper[27835]: I0318 13:37:45.657578 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-4kqlg" podStartSLOduration=1.990258856 podStartE2EDuration="20.657550815s" podCreationTimestamp="2026-03-18 13:37:25 +0000 UTC" firstStartedPulling="2026-03-18 13:37:26.122038471 +0000 UTC m=+810.087250031" lastFinishedPulling="2026-03-18 13:37:44.78933043 +0000 UTC m=+828.754541990" observedRunningTime="2026-03-18 13:37:45.605232863 +0000 UTC m=+829.570444423" watchObservedRunningTime="2026-03-18 13:37:45.657550815 +0000 UTC m=+829.622762375" Mar 18 13:37:46.455346 master-0 kubenswrapper[27835]: I0318 
13:37:46.455289 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" event={"ID":"80eb4d62-b3a3-47af-b14a-9bdc545480c4","Type":"ContainerStarted","Data":"cd901aae8bd589a07b15e643479c7106d95ab5f1c72bd1f97c72d10770e60c0a"} Mar 18 13:37:46.456754 master-0 kubenswrapper[27835]: I0318 13:37:46.456729 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" event={"ID":"01ddc4ef-d5f9-4761-b728-828f9a107b0c","Type":"ContainerStarted","Data":"edacffc867c33c0ba68cbab54b14290643edabd324a4752509e47e11fd29fbc2"} Mar 18 13:37:46.458526 master-0 kubenswrapper[27835]: I0318 13:37:46.458502 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-c4gvg" event={"ID":"2f8860c1-aaaa-419e-9af7-e730b8551861","Type":"ContainerStarted","Data":"67080145ae8b86c9709f07f17d7f2af8f873cd962cacaf8fa778d9983dc4161a"} Mar 18 13:37:46.461661 master-0 kubenswrapper[27835]: I0318 13:37:46.461622 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" event={"ID":"ebfb29f2-806f-4507-8235-055d55cd360b","Type":"ContainerStarted","Data":"fa572ff5101913dd0bae0ba39bec0f157c9f241cca6f274f56a73b184fe94496"} Mar 18 13:37:46.463064 master-0 kubenswrapper[27835]: I0318 13:37:46.463038 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" event={"ID":"84a63422-2593-4bed-9564-f5105e0d7015","Type":"ContainerStarted","Data":"599e87a4f463a526a4be170e5897f04dc3bfbdab62aa8d83968baa4a46b1802d"} Mar 18 13:37:46.463560 master-0 kubenswrapper[27835]: I0318 13:37:46.463541 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:37:46.464975 master-0 kubenswrapper[27835]: I0318 
13:37:46.464949 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" event={"ID":"9b934e6b-fae9-4024-9772-784d60b259b6","Type":"ContainerStarted","Data":"71e2593b09647b045590077b7823d1fb98acc22d53d1bbc7a334b33d746f6c9a"} Mar 18 13:37:46.467315 master-0 kubenswrapper[27835]: I0318 13:37:46.467252 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-rkb57" event={"ID":"ae584c06-a5ee-4994-b744-c3fd2bd1933e","Type":"ContainerStarted","Data":"a02fb12a5e220bc3ca145eae79a1b9ae59887669ffc43f8f0ce236e4aa570876"} Mar 18 13:37:46.469863 master-0 kubenswrapper[27835]: I0318 13:37:46.469833 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-rwdls" Mar 18 13:37:46.503243 master-0 kubenswrapper[27835]: I0318 13:37:46.503127 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-799c6cb588-mvmjx" podStartSLOduration=3.357341013 podStartE2EDuration="21.503099338s" podCreationTimestamp="2026-03-18 13:37:25 +0000 UTC" firstStartedPulling="2026-03-18 13:37:26.765671512 +0000 UTC m=+810.730883072" lastFinishedPulling="2026-03-18 13:37:44.911429837 +0000 UTC m=+828.876641397" observedRunningTime="2026-03-18 13:37:46.500715464 +0000 UTC m=+830.465927034" watchObservedRunningTime="2026-03-18 13:37:46.503099338 +0000 UTC m=+830.468310938" Mar 18 13:37:46.557653 master-0 kubenswrapper[27835]: I0318 13:37:46.557362 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" podStartSLOduration=2.926695842 podStartE2EDuration="19.557340864s" podCreationTimestamp="2026-03-18 13:37:27 +0000 UTC" firstStartedPulling="2026-03-18 13:37:28.295070611 +0000 UTC m=+812.260282171" lastFinishedPulling="2026-03-18 13:37:44.925715633 +0000 UTC m=+828.890927193" 
observedRunningTime="2026-03-18 13:37:46.554198488 +0000 UTC m=+830.519410058" watchObservedRunningTime="2026-03-18 13:37:46.557340864 +0000 UTC m=+830.522552424" Mar 18 13:37:46.598640 master-0 kubenswrapper[27835]: I0318 13:37:46.598171 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-5hd6j" podStartSLOduration=2.8392344 podStartE2EDuration="24.598157805s" podCreationTimestamp="2026-03-18 13:37:22 +0000 UTC" firstStartedPulling="2026-03-18 13:37:23.278741501 +0000 UTC m=+807.243953061" lastFinishedPulling="2026-03-18 13:37:45.037664906 +0000 UTC m=+829.002876466" observedRunningTime="2026-03-18 13:37:46.597272512 +0000 UTC m=+830.562484072" watchObservedRunningTime="2026-03-18 13:37:46.598157805 +0000 UTC m=+830.563369365" Mar 18 13:37:46.681901 master-0 kubenswrapper[27835]: I0318 13:37:46.681369 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-rkb57" podStartSLOduration=3.212916084 podStartE2EDuration="18.681347602s" podCreationTimestamp="2026-03-18 13:37:28 +0000 UTC" firstStartedPulling="2026-03-18 13:37:29.467942553 +0000 UTC m=+813.433154113" lastFinishedPulling="2026-03-18 13:37:44.936374071 +0000 UTC m=+828.901585631" observedRunningTime="2026-03-18 13:37:46.65570968 +0000 UTC m=+830.620921240" watchObservedRunningTime="2026-03-18 13:37:46.681347602 +0000 UTC m=+830.646559162" Mar 18 13:37:50.338521 master-0 kubenswrapper[27835]: I0318 13:37:50.338469 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-5gdv7" Mar 18 13:37:51.529700 master-0 kubenswrapper[27835]: I0318 13:37:51.529622 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" 
event={"ID":"ebfb29f2-806f-4507-8235-055d55cd360b","Type":"ContainerStarted","Data":"effc070a364a58adebd15fdda9f6f53d5b7307ae61769e3af4962a91221f78da"} Mar 18 13:37:51.530244 master-0 kubenswrapper[27835]: I0318 13:37:51.529873 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:37:51.574446 master-0 kubenswrapper[27835]: I0318 13:37:51.568445 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" podStartSLOduration=10.5661043 podStartE2EDuration="15.568396921s" podCreationTimestamp="2026-03-18 13:37:36 +0000 UTC" firstStartedPulling="2026-03-18 13:37:45.435071378 +0000 UTC m=+829.400282938" lastFinishedPulling="2026-03-18 13:37:50.437363999 +0000 UTC m=+834.402575559" observedRunningTime="2026-03-18 13:37:51.567560089 +0000 UTC m=+835.532771649" watchObservedRunningTime="2026-03-18 13:37:51.568396921 +0000 UTC m=+835.533608481" Mar 18 13:37:55.597654 master-0 kubenswrapper[27835]: I0318 13:37:55.597592 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" event={"ID":"01ddc4ef-d5f9-4761-b728-828f9a107b0c","Type":"ContainerStarted","Data":"468039f6da8ec9d5aeb2f2cf341ad768e84affa146ec850909733d8349cc3910"} Mar 18 13:37:55.598322 master-0 kubenswrapper[27835]: I0318 13:37:55.597733 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:37:55.634688 master-0 kubenswrapper[27835]: I0318 13:37:55.634393 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" podStartSLOduration=9.554785005 podStartE2EDuration="18.634364908s" podCreationTimestamp="2026-03-18 13:37:37 +0000 UTC" firstStartedPulling="2026-03-18 13:37:45.616632151 
+0000 UTC m=+829.581843711" lastFinishedPulling="2026-03-18 13:37:54.696212054 +0000 UTC m=+838.661423614" observedRunningTime="2026-03-18 13:37:55.627508923 +0000 UTC m=+839.592720493" watchObservedRunningTime="2026-03-18 13:37:55.634364908 +0000 UTC m=+839.599576468" Mar 18 13:37:57.650697 master-0 kubenswrapper[27835]: I0318 13:37:57.650647 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5d6bf8dcbf-mpv4c" Mar 18 13:38:07.463490 master-0 kubenswrapper[27835]: I0318 13:38:07.463433 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c4dc89f9c-lppql" Mar 18 13:38:26.558163 master-0 kubenswrapper[27835]: I0318 13:38:26.558068 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-549849bb46-fnr78" Mar 18 13:38:35.532990 master-0 kubenswrapper[27835]: I0318 13:38:35.532921 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc"] Mar 18 13:38:35.537531 master-0 kubenswrapper[27835]: I0318 13:38:35.537446 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.556815 master-0 kubenswrapper[27835]: I0318 13:38:35.549198 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 18 13:38:35.576055 master-0 kubenswrapper[27835]: I0318 13:38:35.576015 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-n7qbc"] Mar 18 13:38:35.592650 master-0 kubenswrapper[27835]: I0318 13:38:35.591810 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc"] Mar 18 13:38:35.592650 master-0 kubenswrapper[27835]: I0318 13:38:35.592016 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.602393 master-0 kubenswrapper[27835]: I0318 13:38:35.601809 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 18 13:38:35.602393 master-0 kubenswrapper[27835]: I0318 13:38:35.601998 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 18 13:38:35.655733 master-0 kubenswrapper[27835]: I0318 13:38:35.654533 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jr6nk"] Mar 18 13:38:35.656547 master-0 kubenswrapper[27835]: I0318 13:38:35.656405 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.659989 master-0 kubenswrapper[27835]: I0318 13:38:35.659875 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 18 13:38:35.661514 master-0 kubenswrapper[27835]: I0318 13:38:35.661367 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 18 13:38:35.662068 master-0 kubenswrapper[27835]: I0318 13:38:35.661671 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 18 13:38:35.665484 master-0 kubenswrapper[27835]: I0318 13:38:35.665408 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-h6chv"] Mar 18 13:38:35.668283 master-0 kubenswrapper[27835]: I0318 13:38:35.668243 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.673250 master-0 kubenswrapper[27835]: I0318 13:38:35.673222 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 18 13:38:35.689112 master-0 kubenswrapper[27835]: I0318 13:38:35.685433 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57x6\" (UniqueName: \"kubernetes.io/projected/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-kube-api-access-f57x6\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.689375 master-0 kubenswrapper[27835]: I0318 13:38:35.689337 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-h6chv"] Mar 18 13:38:35.691637 master-0 kubenswrapper[27835]: I0318 13:38:35.691432 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.794635 master-0 kubenswrapper[27835]: I0318 13:38:35.794461 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794635 master-0 kubenswrapper[27835]: I0318 13:38:35.794545 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57x6\" (UniqueName: \"kubernetes.io/projected/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-kube-api-access-f57x6\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.794635 master-0 kubenswrapper[27835]: I0318 13:38:35.794580 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-metrics-certs\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.794635 master-0 kubenswrapper[27835]: I0318 13:38:35.794611 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-metrics-certs\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794677 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-reloader\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794708 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794723 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-conf\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794739 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttjpx\" (UniqueName: \"kubernetes.io/projected/675171a9-caa6-495c-930b-aee9a8d4cbeb-kube-api-access-ttjpx\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794753 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-startup\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794769 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/675171a9-caa6-495c-930b-aee9a8d4cbeb-metallb-excludel2\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794805 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzghr\" (UniqueName: \"kubernetes.io/projected/c730deaa-80a1-4fa1-aa84-0c523df12f53-kube-api-access-pzghr\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794820 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-sockets\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794853 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics-certs\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794871 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/bb0c2adb-b940-48a0-b870-826b63cc2de4-kube-api-access-zk77c\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 
13:38:35.794894 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-cert\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.794987 master-0 kubenswrapper[27835]: I0318 13:38:35.794914 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.800849 master-0 kubenswrapper[27835]: I0318 13:38:35.800798 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.827464 master-0 kubenswrapper[27835]: I0318 13:38:35.827432 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f57x6\" (UniqueName: \"kubernetes.io/projected/ea0bf2ff-c852-4365-9856-f3e4aa5b766d-kube-api-access-f57x6\") pod \"frr-k8s-webhook-server-bcc4b6f68-49gvc\" (UID: \"ea0bf2ff-c852-4365-9856-f3e4aa5b766d\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.878882 master-0 kubenswrapper[27835]: I0318 13:38:35.878807 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895689 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-metrics-certs\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895737 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-metrics-certs\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895782 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-reloader\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895805 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895818 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-conf\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 
kubenswrapper[27835]: I0318 13:38:35.895836 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttjpx\" (UniqueName: \"kubernetes.io/projected/675171a9-caa6-495c-930b-aee9a8d4cbeb-kube-api-access-ttjpx\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895855 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-startup\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895870 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/675171a9-caa6-495c-930b-aee9a8d4cbeb-metallb-excludel2\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzghr\" (UniqueName: \"kubernetes.io/projected/c730deaa-80a1-4fa1-aa84-0c523df12f53-kube-api-access-pzghr\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895913 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-sockets\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895943 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics-certs\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895961 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/bb0c2adb-b940-48a0-b870-826b63cc2de4-kube-api-access-zk77c\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.895981 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-cert\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.896488 master-0 kubenswrapper[27835]: I0318 13:38:35.896002 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.897993 master-0 kubenswrapper[27835]: I0318 13:38:35.897938 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.899445 master-0 kubenswrapper[27835]: I0318 13:38:35.899069 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-conf\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: I0318 13:38:35.900073 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-sockets\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: E0318 13:38:35.900244 27835 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: E0318 13:38:35.900287 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist podName:675171a9-caa6-495c-930b-aee9a8d4cbeb nodeName:}" failed. No retries permitted until 2026-03-18 13:38:36.400272224 +0000 UTC m=+880.365483784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist") pod "speaker-jr6nk" (UID: "675171a9-caa6-495c-930b-aee9a8d4cbeb") : secret "metallb-memberlist" not found Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: I0318 13:38:35.901707 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bb0c2adb-b940-48a0-b870-826b63cc2de4-reloader\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: I0318 13:38:35.902193 27835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: I0318 13:38:35.902214 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/675171a9-caa6-495c-930b-aee9a8d4cbeb-metallb-excludel2\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk" Mar 18 13:38:35.903233 master-0 kubenswrapper[27835]: I0318 13:38:35.902192 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bb0c2adb-b940-48a0-b870-826b63cc2de4-frr-startup\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:35.905019 master-0 kubenswrapper[27835]: I0318 13:38:35.904943 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-metrics-certs\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:35.905400 master-0 kubenswrapper[27835]: I0318 13:38:35.905353 
27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-metrics-certs\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:35.912589 master-0 kubenswrapper[27835]: I0318 13:38:35.912184 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb0c2adb-b940-48a0-b870-826b63cc2de4-metrics-certs\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc"
Mar 18 13:38:35.912589 master-0 kubenswrapper[27835]: I0318 13:38:35.912539 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c730deaa-80a1-4fa1-aa84-0c523df12f53-cert\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv"
Mar 18 13:38:35.917231 master-0 kubenswrapper[27835]: I0318 13:38:35.917214 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk77c\" (UniqueName: \"kubernetes.io/projected/bb0c2adb-b940-48a0-b870-826b63cc2de4-kube-api-access-zk77c\") pod \"frr-k8s-n7qbc\" (UID: \"bb0c2adb-b940-48a0-b870-826b63cc2de4\") " pod="metallb-system/frr-k8s-n7qbc"
Mar 18 13:38:35.919549 master-0 kubenswrapper[27835]: I0318 13:38:35.919498 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttjpx\" (UniqueName: \"kubernetes.io/projected/675171a9-caa6-495c-930b-aee9a8d4cbeb-kube-api-access-ttjpx\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:35.922027 master-0 kubenswrapper[27835]: I0318 13:38:35.921983 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzghr\" (UniqueName: \"kubernetes.io/projected/c730deaa-80a1-4fa1-aa84-0c523df12f53-kube-api-access-pzghr\") pod \"controller-7bb4cc7c98-h6chv\" (UID: \"c730deaa-80a1-4fa1-aa84-0c523df12f53\") " pod="metallb-system/controller-7bb4cc7c98-h6chv"
Mar 18 13:38:35.927485 master-0 kubenswrapper[27835]: I0318 13:38:35.927396 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-n7qbc"
Mar 18 13:38:35.993574 master-0 kubenswrapper[27835]: I0318 13:38:35.993529 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-h6chv"
Mar 18 13:38:36.299892 master-0 kubenswrapper[27835]: I0318 13:38:36.299820 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc"]
Mar 18 13:38:36.406222 master-0 kubenswrapper[27835]: I0318 13:38:36.406095 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:36.406444 master-0 kubenswrapper[27835]: E0318 13:38:36.406389 27835 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 18 13:38:36.406508 master-0 kubenswrapper[27835]: E0318 13:38:36.406479 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist podName:675171a9-caa6-495c-930b-aee9a8d4cbeb nodeName:}" failed. No retries permitted until 2026-03-18 13:38:37.406461983 +0000 UTC m=+881.371673543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist") pod "speaker-jr6nk" (UID: "675171a9-caa6-495c-930b-aee9a8d4cbeb") : secret "metallb-memberlist" not found
Mar 18 13:38:36.430818 master-0 kubenswrapper[27835]: I0318 13:38:36.430737 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-h6chv"]
Mar 18 13:38:36.431691 master-0 kubenswrapper[27835]: W0318 13:38:36.431646 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc730deaa_80a1_4fa1_aa84_0c523df12f53.slice/crio-cbae73c574e1569c09e4d8ceb9798e219ed34273ecec68a0d3798711b66fcb7e WatchSource:0}: Error finding container cbae73c574e1569c09e4d8ceb9798e219ed34273ecec68a0d3798711b66fcb7e: Status 404 returned error can't find the container with id cbae73c574e1569c09e4d8ceb9798e219ed34273ecec68a0d3798711b66fcb7e
Mar 18 13:38:37.002878 master-0 kubenswrapper[27835]: I0318 13:38:37.002819 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-h6chv" event={"ID":"c730deaa-80a1-4fa1-aa84-0c523df12f53","Type":"ContainerStarted","Data":"381c9e98ead0b7822937d33ab7910b180bc6eeca8c0437cc8a5c8506833871ed"}
Mar 18 13:38:37.002878 master-0 kubenswrapper[27835]: I0318 13:38:37.002884 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-h6chv" event={"ID":"c730deaa-80a1-4fa1-aa84-0c523df12f53","Type":"ContainerStarted","Data":"cbae73c574e1569c09e4d8ceb9798e219ed34273ecec68a0d3798711b66fcb7e"}
Mar 18 13:38:37.005102 master-0 kubenswrapper[27835]: I0318 13:38:37.005060 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"78f462b78e9bdd884d5b2108a7fbe3e91fca1eeb6a0a7e5d25a43e5089ef2613"}
Mar 18 13:38:37.006324 master-0 kubenswrapper[27835]: I0318 13:38:37.006290 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" event={"ID":"ea0bf2ff-c852-4365-9856-f3e4aa5b766d","Type":"ContainerStarted","Data":"d1329ee38775bad3e930c53ee10f9b05762d12fc37fafbe7153ad68186121f55"}
Mar 18 13:38:37.421461 master-0 kubenswrapper[27835]: I0318 13:38:37.420846 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:37.427628 master-0 kubenswrapper[27835]: I0318 13:38:37.427575 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/675171a9-caa6-495c-930b-aee9a8d4cbeb-memberlist\") pod \"speaker-jr6nk\" (UID: \"675171a9-caa6-495c-930b-aee9a8d4cbeb\") " pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:37.478083 master-0 kubenswrapper[27835]: I0318 13:38:37.478035 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:37.503918 master-0 kubenswrapper[27835]: W0318 13:38:37.503786 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod675171a9_caa6_495c_930b_aee9a8d4cbeb.slice/crio-91eebc474fcd590b789ea9aba097d07d2faabaec3988d03e40dc443c4f19a67e WatchSource:0}: Error finding container 91eebc474fcd590b789ea9aba097d07d2faabaec3988d03e40dc443c4f19a67e: Status 404 returned error can't find the container with id 91eebc474fcd590b789ea9aba097d07d2faabaec3988d03e40dc443c4f19a67e
Mar 18 13:38:37.919296 master-0 kubenswrapper[27835]: I0318 13:38:37.919210 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"]
Mar 18 13:38:37.921285 master-0 kubenswrapper[27835]: I0318 13:38:37.921238 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"
Mar 18 13:38:37.937496 master-0 kubenswrapper[27835]: I0318 13:38:37.934950 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srhrp\" (UniqueName: \"kubernetes.io/projected/4eadbe2a-43ed-4b28-871c-b8f7a3579fae-kube-api-access-srhrp\") pod \"nmstate-metrics-9b8c8685d-29vrd\" (UID: \"4eadbe2a-43ed-4b28-871c-b8f7a3579fae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"
Mar 18 13:38:37.964438 master-0 kubenswrapper[27835]: I0318 13:38:37.946539 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"]
Mar 18 13:38:37.964438 master-0 kubenswrapper[27835]: I0318 13:38:37.950099 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:37.964438 master-0 kubenswrapper[27835]: I0318 13:38:37.951924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 18 13:38:37.964438 master-0 kubenswrapper[27835]: I0318 13:38:37.959271 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"]
Mar 18 13:38:37.968638 master-0 kubenswrapper[27835]: I0318 13:38:37.967370 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"]
Mar 18 13:38:37.980705 master-0 kubenswrapper[27835]: I0318 13:38:37.980084 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k22dv"]
Mar 18 13:38:37.985428 master-0 kubenswrapper[27835]: I0318 13:38:37.984805 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.028434 master-0 kubenswrapper[27835]: I0318 13:38:38.022742 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jr6nk" event={"ID":"675171a9-caa6-495c-930b-aee9a8d4cbeb","Type":"ContainerStarted","Data":"7a10233d8d679ecf73d1f61026f542bc0b741f4eab6f993e158081b512bc0577"}
Mar 18 13:38:38.028434 master-0 kubenswrapper[27835]: I0318 13:38:38.022799 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jr6nk" event={"ID":"675171a9-caa6-495c-930b-aee9a8d4cbeb","Type":"ContainerStarted","Data":"91eebc474fcd590b789ea9aba097d07d2faabaec3988d03e40dc443c4f19a67e"}
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.036894 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srhrp\" (UniqueName: \"kubernetes.io/projected/4eadbe2a-43ed-4b28-871c-b8f7a3579fae-kube-api-access-srhrp\") pod \"nmstate-metrics-9b8c8685d-29vrd\" (UID: \"4eadbe2a-43ed-4b28-871c-b8f7a3579fae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.036977 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-nmstate-lock\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.037003 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlcbh\" (UniqueName: \"kubernetes.io/projected/bc036027-dfb3-45cb-aedb-017872d83490-kube-api-access-xlcbh\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.037039 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6mv\" (UniqueName: \"kubernetes.io/projected/2954b55d-28c7-453e-b699-c73bb05e05b9-kube-api-access-2m6mv\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.037057 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bc036027-dfb3-45cb-aedb-017872d83490-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.037168 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-ovs-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.040425 master-0 kubenswrapper[27835]: I0318 13:38:38.037198 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-dbus-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.102471 master-0 kubenswrapper[27835]: I0318 13:38:38.086484 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srhrp\" (UniqueName: \"kubernetes.io/projected/4eadbe2a-43ed-4b28-871c-b8f7a3579fae-kube-api-access-srhrp\") pod \"nmstate-metrics-9b8c8685d-29vrd\" (UID: \"4eadbe2a-43ed-4b28-871c-b8f7a3579fae\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"
Mar 18 13:38:38.148559 master-0 kubenswrapper[27835]: I0318 13:38:38.148460 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-nmstate-lock\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.148559 master-0 kubenswrapper[27835]: I0318 13:38:38.148522 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlcbh\" (UniqueName: \"kubernetes.io/projected/bc036027-dfb3-45cb-aedb-017872d83490-kube-api-access-xlcbh\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.148807 master-0 kubenswrapper[27835]: I0318 13:38:38.148581 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m6mv\" (UniqueName: \"kubernetes.io/projected/2954b55d-28c7-453e-b699-c73bb05e05b9-kube-api-access-2m6mv\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.148807 master-0 kubenswrapper[27835]: I0318 13:38:38.148614 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bc036027-dfb3-45cb-aedb-017872d83490-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.148807 master-0 kubenswrapper[27835]: I0318 13:38:38.148672 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-ovs-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.148807 master-0 kubenswrapper[27835]: I0318 13:38:38.148702 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-dbus-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.148932 master-0 kubenswrapper[27835]: I0318 13:38:38.148853 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-dbus-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.148932 master-0 kubenswrapper[27835]: I0318 13:38:38.148903 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-nmstate-lock\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.151161 master-0 kubenswrapper[27835]: I0318 13:38:38.149482 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"]
Mar 18 13:38:38.155748 master-0 kubenswrapper[27835]: I0318 13:38:38.155399 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/bc036027-dfb3-45cb-aedb-017872d83490-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.155748 master-0 kubenswrapper[27835]: I0318 13:38:38.155522 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2954b55d-28c7-453e-b699-c73bb05e05b9-ovs-socket\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.169252 master-0 kubenswrapper[27835]: I0318 13:38:38.167942 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.174809 master-0 kubenswrapper[27835]: I0318 13:38:38.174768 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m6mv\" (UniqueName: \"kubernetes.io/projected/2954b55d-28c7-453e-b699-c73bb05e05b9-kube-api-access-2m6mv\") pod \"nmstate-handler-k22dv\" (UID: \"2954b55d-28c7-453e-b699-c73bb05e05b9\") " pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.175003 master-0 kubenswrapper[27835]: I0318 13:38:38.174897 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 18 13:38:38.180125 master-0 kubenswrapper[27835]: I0318 13:38:38.175120 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 18 13:38:38.180125 master-0 kubenswrapper[27835]: I0318 13:38:38.177450 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlcbh\" (UniqueName: \"kubernetes.io/projected/bc036027-dfb3-45cb-aedb-017872d83490-kube-api-access-xlcbh\") pod \"nmstate-webhook-5f558f5558-mqsmx\" (UID: \"bc036027-dfb3-45cb-aedb-017872d83490\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.198871 master-0 kubenswrapper[27835]: I0318 13:38:38.198763 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"]
Mar 18 13:38:38.264437 master-0 kubenswrapper[27835]: I0318 13:38:38.258273 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tt99\" (UniqueName: \"kubernetes.io/projected/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-kube-api-access-8tt99\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.264437 master-0 kubenswrapper[27835]: I0318 13:38:38.258341 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.264437 master-0 kubenswrapper[27835]: I0318 13:38:38.258548 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.286440 master-0 kubenswrapper[27835]: I0318 13:38:38.284384 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"
Mar 18 13:38:38.299746 master-0 kubenswrapper[27835]: I0318 13:38:38.293133 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"
Mar 18 13:38:38.325450 master-0 kubenswrapper[27835]: I0318 13:38:38.325039 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k22dv"
Mar 18 13:38:38.364441 master-0 kubenswrapper[27835]: I0318 13:38:38.364355 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.374968 master-0 kubenswrapper[27835]: I0318 13:38:38.365150 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tt99\" (UniqueName: \"kubernetes.io/projected/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-kube-api-access-8tt99\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.374968 master-0 kubenswrapper[27835]: I0318 13:38:38.365232 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.374968 master-0 kubenswrapper[27835]: I0318 13:38:38.371594 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.374968 master-0 kubenswrapper[27835]: I0318 13:38:38.374354 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.416399 master-0 kubenswrapper[27835]: I0318 13:38:38.416231 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tt99\" (UniqueName: \"kubernetes.io/projected/4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3-kube-api-access-8tt99\") pod \"nmstate-console-plugin-86f58fcf4-jxd5c\" (UID: \"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.497870 master-0 kubenswrapper[27835]: I0318 13:38:38.497664 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d8bdbbd8c-dfpg8"]
Mar 18 13:38:38.505921 master-0 kubenswrapper[27835]: I0318 13:38:38.505878 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.509164 master-0 kubenswrapper[27835]: I0318 13:38:38.509125 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d8bdbbd8c-dfpg8"]
Mar 18 13:38:38.555170 master-0 kubenswrapper[27835]: I0318 13:38:38.555090 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"
Mar 18 13:38:38.571053 master-0 kubenswrapper[27835]: I0318 13:38:38.571013 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571183 master-0 kubenswrapper[27835]: I0318 13:38:38.571062 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-oauth-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571183 master-0 kubenswrapper[27835]: I0318 13:38:38.571095 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-console-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571183 master-0 kubenswrapper[27835]: I0318 13:38:38.571122 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-oauth-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571183 master-0 kubenswrapper[27835]: I0318 13:38:38.571144 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-trusted-ca-bundle\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571183 master-0 kubenswrapper[27835]: I0318 13:38:38.571175 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-service-ca\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.571401 master-0 kubenswrapper[27835]: I0318 13:38:38.571251 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cf44\" (UniqueName: \"kubernetes.io/projected/30881447-b8cd-4e98-a3d6-78b186a00d82-kube-api-access-8cf44\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672198 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cf44\" (UniqueName: \"kubernetes.io/projected/30881447-b8cd-4e98-a3d6-78b186a00d82-kube-api-access-8cf44\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672261 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672637 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-oauth-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672677 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-console-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672936 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-oauth-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.672977 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-trusted-ca-bundle\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.673369 master-0 kubenswrapper[27835]: I0318 13:38:38.673020 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-service-ca\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.675902 master-0 kubenswrapper[27835]: I0318 13:38:38.675844 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-console-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.676202 master-0 kubenswrapper[27835]: I0318 13:38:38.676177 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-oauth-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.677457 master-0 kubenswrapper[27835]: I0318 13:38:38.677389 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-service-ca\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.680719 master-0 kubenswrapper[27835]: I0318 13:38:38.680682 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-oauth-config\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.681827 master-0 kubenswrapper[27835]: I0318 13:38:38.681349 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30881447-b8cd-4e98-a3d6-78b186a00d82-trusted-ca-bundle\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.681901 master-0 kubenswrapper[27835]: I0318 13:38:38.681843 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/30881447-b8cd-4e98-a3d6-78b186a00d82-console-serving-cert\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.701764 master-0 kubenswrapper[27835]: I0318 13:38:38.701692 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cf44\" (UniqueName: \"kubernetes.io/projected/30881447-b8cd-4e98-a3d6-78b186a00d82-kube-api-access-8cf44\") pod \"console-7d8bdbbd8c-dfpg8\" (UID: \"30881447-b8cd-4e98-a3d6-78b186a00d82\") " pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.847436 master-0 kubenswrapper[27835]: I0318 13:38:38.838319 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d8bdbbd8c-dfpg8"
Mar 18 13:38:38.948494 master-0 kubenswrapper[27835]: I0318 13:38:38.948443 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd"]
Mar 18 13:38:39.031999 master-0 kubenswrapper[27835]: I0318 13:38:39.031868 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-h6chv" event={"ID":"c730deaa-80a1-4fa1-aa84-0c523df12f53","Type":"ContainerStarted","Data":"033c982cbd160e2ab426f7b3af8b6dbe306793a3b393b1fced9dde7488cb55ea"}
Mar 18 13:38:39.033326 master-0 kubenswrapper[27835]: I0318 13:38:39.033287 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-h6chv"
Mar 18 13:38:39.034594 master-0 kubenswrapper[27835]: I0318 13:38:39.034556 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd" event={"ID":"4eadbe2a-43ed-4b28-871c-b8f7a3579fae","Type":"ContainerStarted","Data":"90db81b6611471ea6c41b58d441ab7c1105d4594f8f8e6a770ac5376a10e5130"}
Mar 18 13:38:39.035429 master-0 kubenswrapper[27835]: I0318 13:38:39.035382 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k22dv" event={"ID":"2954b55d-28c7-453e-b699-c73bb05e05b9","Type":"ContainerStarted","Data":"65047fddc418eecae14ed146ef0bf832e370396979cf253b66b0ea9275c2c1a7"}
Mar 18 13:38:39.099217 master-0 kubenswrapper[27835]: I0318 13:38:39.099100 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-h6chv" podStartSLOduration=2.244475963 podStartE2EDuration="4.099077914s" podCreationTimestamp="2026-03-18 13:38:35 +0000 UTC" firstStartedPulling="2026-03-18 13:38:36.53596036 +0000 UTC m=+880.501171920" lastFinishedPulling="2026-03-18 13:38:38.390562321 +0000 UTC m=+882.355773871" observedRunningTime="2026-03-18 13:38:39.088721504 +0000 UTC m=+883.053933064" watchObservedRunningTime="2026-03-18 13:38:39.099077914 +0000 UTC m=+883.064289474"
Mar 18 13:38:39.152568 master-0 kubenswrapper[27835]: I0318 13:38:39.149539 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx"]
Mar 18 13:38:39.164733 master-0 kubenswrapper[27835]: I0318 13:38:39.164581 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c"]
Mar 18 13:38:39.174066 master-0 kubenswrapper[27835]: W0318 13:38:39.174013 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a9778bb_d878_4ae5_a7ab_004a1c9ee1e3.slice/crio-ff9a2c715fb8ade196f56feada23ccf9ceb031b1ff7f2d8ed1c700e1b8e66104 WatchSource:0}: Error finding container ff9a2c715fb8ade196f56feada23ccf9ceb031b1ff7f2d8ed1c700e1b8e66104: Status 404 returned error can't find the container with id ff9a2c715fb8ade196f56feada23ccf9ceb031b1ff7f2d8ed1c700e1b8e66104
Mar 18 13:38:39.443500 master-0 kubenswrapper[27835]: I0318 13:38:39.442621 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d8bdbbd8c-dfpg8"]
Mar 18 13:38:39.444126 master-0 kubenswrapper[27835]: W0318 13:38:39.444092 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30881447_b8cd_4e98_a3d6_78b186a00d82.slice/crio-cd5ee6160f564f774d49037f56d2bdee0a09e4342f815a8637539f2d21a86edf WatchSource:0}: Error finding container cd5ee6160f564f774d49037f56d2bdee0a09e4342f815a8637539f2d21a86edf: Status 404 returned error can't find the container with id cd5ee6160f564f774d49037f56d2bdee0a09e4342f815a8637539f2d21a86edf
Mar 18 13:38:40.046830 master-0 kubenswrapper[27835]: I0318 13:38:40.046698 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d8bdbbd8c-dfpg8" event={"ID":"30881447-b8cd-4e98-a3d6-78b186a00d82","Type":"ContainerStarted","Data":"bc53b95aeab4bfa37557e4a7dbc298ad0ea6114795054c77bab79eb7e122df1f"}
Mar 18 13:38:40.046830 master-0 kubenswrapper[27835]: I0318 13:38:40.046762 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d8bdbbd8c-dfpg8" event={"ID":"30881447-b8cd-4e98-a3d6-78b186a00d82","Type":"ContainerStarted","Data":"cd5ee6160f564f774d49037f56d2bdee0a09e4342f815a8637539f2d21a86edf"}
Mar 18 13:38:40.049602 master-0 kubenswrapper[27835]: I0318 13:38:40.049564 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx" event={"ID":"bc036027-dfb3-45cb-aedb-017872d83490","Type":"ContainerStarted","Data":"5dcc2fcc737ba8ea0ece9528d511792bf1e306347c75949b3f8098a65f318fc8"}
Mar 18 13:38:40.051317 master-0 kubenswrapper[27835]: I0318 13:38:40.051288 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c" event={"ID":"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3","Type":"ContainerStarted","Data":"ff9a2c715fb8ade196f56feada23ccf9ceb031b1ff7f2d8ed1c700e1b8e66104"}
Mar 18 13:38:40.054708 master-0 kubenswrapper[27835]: I0318 13:38:40.054670 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jr6nk" event={"ID":"675171a9-caa6-495c-930b-aee9a8d4cbeb","Type":"ContainerStarted","Data":"e6b4f5b7785e10855456d001b6f81084ddf549c9accff3a1d270cb322941e653"}
Mar 18 13:38:40.054708 master-0 kubenswrapper[27835]: I0318 13:38:40.054704 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jr6nk"
Mar 18 13:38:40.069914 master-0 kubenswrapper[27835]: I0318 13:38:40.069832 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d8bdbbd8c-dfpg8" podStartSLOduration=2.069816048 podStartE2EDuration="2.069816048s" podCreationTimestamp="2026-03-18 13:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:38:40.067118315 +0000 UTC m=+884.032329885" watchObservedRunningTime="2026-03-18 13:38:40.069816048 +0000 UTC m=+884.035027608"
Mar 18 13:38:40.118130 master-0 kubenswrapper[27835]: I0318 13:38:40.117897 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jr6nk" podStartSLOduration=3.291144325 podStartE2EDuration="5.117874555s" podCreationTimestamp="2026-03-18 13:38:35 +0000 UTC" firstStartedPulling="2026-03-18 13:38:37.786073887 +0000 UTC m=+881.751285457" lastFinishedPulling="2026-03-18 13:38:39.612804127 +0000 UTC m=+883.578015687" observedRunningTime="2026-03-18 13:38:40.112813859 +0000 UTC m=+884.078025429" watchObservedRunningTime="2026-03-18 13:38:40.117874555 +0000 UTC m=+884.083086135"
Mar 18 13:38:46.107014 master-0 kubenswrapper[27835]: I0318
13:38:46.106934 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k22dv" event={"ID":"2954b55d-28c7-453e-b699-c73bb05e05b9","Type":"ContainerStarted","Data":"88eeeb784d0188849f5439d8ca0cc2c1b090b5961d1c03b5aaf9e3cbc2d60bcb"} Mar 18 13:38:46.107729 master-0 kubenswrapper[27835]: I0318 13:38:46.107078 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k22dv" Mar 18 13:38:46.108401 master-0 kubenswrapper[27835]: I0318 13:38:46.108363 27835 generic.go:334] "Generic (PLEG): container finished" podID="bb0c2adb-b940-48a0-b870-826b63cc2de4" containerID="07b1c70dd3d6335fdd70482daffeff319f36162eafed29fc59b93db9231bc971" exitCode=0 Mar 18 13:38:46.108510 master-0 kubenswrapper[27835]: I0318 13:38:46.108450 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerDied","Data":"07b1c70dd3d6335fdd70482daffeff319f36162eafed29fc59b93db9231bc971"} Mar 18 13:38:46.110488 master-0 kubenswrapper[27835]: I0318 13:38:46.110451 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd" event={"ID":"4eadbe2a-43ed-4b28-871c-b8f7a3579fae","Type":"ContainerStarted","Data":"1dd710eeff497b1aed36ac711850f1b19da8201bc875acbcf05b04b86f45b833"} Mar 18 13:38:46.110488 master-0 kubenswrapper[27835]: I0318 13:38:46.110491 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd" event={"ID":"4eadbe2a-43ed-4b28-871c-b8f7a3579fae","Type":"ContainerStarted","Data":"5092f009a1f1813bb5fec76f4e73b045b72a1f3c2a9e844b532069516caefa83"} Mar 18 13:38:46.112802 master-0 kubenswrapper[27835]: I0318 13:38:46.112768 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx" 
event={"ID":"bc036027-dfb3-45cb-aedb-017872d83490","Type":"ContainerStarted","Data":"32432c909538ddadf9e31df0ebcbe600e980dc93be53e404fbef2524ae50c4f4"} Mar 18 13:38:46.112996 master-0 kubenswrapper[27835]: I0318 13:38:46.112972 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx" Mar 18 13:38:46.125819 master-0 kubenswrapper[27835]: I0318 13:38:46.119863 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c" event={"ID":"4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3","Type":"ContainerStarted","Data":"d6b83ce2a88829661ea184fcf59ed0ddebfb8ec245921f5941a38cdfd7effd9c"} Mar 18 13:38:46.130871 master-0 kubenswrapper[27835]: I0318 13:38:46.130797 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" event={"ID":"ea0bf2ff-c852-4365-9856-f3e4aa5b766d","Type":"ContainerStarted","Data":"d04b62ea7d0e4644ce6f3d49f8db7ae0c946458d8e4a75e886efb111098d65fe"} Mar 18 13:38:46.131063 master-0 kubenswrapper[27835]: I0318 13:38:46.130973 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:46.137910 master-0 kubenswrapper[27835]: I0318 13:38:46.137784 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k22dv" podStartSLOduration=2.241779047 podStartE2EDuration="9.137756175s" podCreationTimestamp="2026-03-18 13:38:37 +0000 UTC" firstStartedPulling="2026-03-18 13:38:38.469949375 +0000 UTC m=+882.435160935" lastFinishedPulling="2026-03-18 13:38:45.365926503 +0000 UTC m=+889.331138063" observedRunningTime="2026-03-18 13:38:46.13235651 +0000 UTC m=+890.097568090" watchObservedRunningTime="2026-03-18 13:38:46.137756175 +0000 UTC m=+890.102967745" Mar 18 13:38:46.166228 master-0 kubenswrapper[27835]: I0318 13:38:46.166150 27835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-jxd5c" podStartSLOduration=1.984336849 podStartE2EDuration="8.166131462s" podCreationTimestamp="2026-03-18 13:38:38 +0000 UTC" firstStartedPulling="2026-03-18 13:38:39.180575304 +0000 UTC m=+883.145786864" lastFinishedPulling="2026-03-18 13:38:45.362369907 +0000 UTC m=+889.327581477" observedRunningTime="2026-03-18 13:38:46.155458523 +0000 UTC m=+890.120670093" watchObservedRunningTime="2026-03-18 13:38:46.166131462 +0000 UTC m=+890.131343032" Mar 18 13:38:46.203899 master-0 kubenswrapper[27835]: I0318 13:38:46.203795 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-29vrd" podStartSLOduration=2.785816838 podStartE2EDuration="9.203773458s" podCreationTimestamp="2026-03-18 13:38:37 +0000 UTC" firstStartedPulling="2026-03-18 13:38:38.954203861 +0000 UTC m=+882.919415411" lastFinishedPulling="2026-03-18 13:38:45.372160471 +0000 UTC m=+889.337372031" observedRunningTime="2026-03-18 13:38:46.173982354 +0000 UTC m=+890.139193934" watchObservedRunningTime="2026-03-18 13:38:46.203773458 +0000 UTC m=+890.168985018" Mar 18 13:38:46.229478 master-0 kubenswrapper[27835]: I0318 13:38:46.226060 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx" podStartSLOduration=3.037268309 podStartE2EDuration="9.226037979s" podCreationTimestamp="2026-03-18 13:38:37 +0000 UTC" firstStartedPulling="2026-03-18 13:38:39.168848608 +0000 UTC m=+883.134060158" lastFinishedPulling="2026-03-18 13:38:45.357618248 +0000 UTC m=+889.322829828" observedRunningTime="2026-03-18 13:38:46.203072199 +0000 UTC m=+890.168283749" watchObservedRunningTime="2026-03-18 13:38:46.226037979 +0000 UTC m=+890.191249529" Mar 18 13:38:46.263100 master-0 kubenswrapper[27835]: I0318 13:38:46.261853 27835 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" podStartSLOduration=2.207322889 podStartE2EDuration="11.261829386s" podCreationTimestamp="2026-03-18 13:38:35 +0000 UTC" firstStartedPulling="2026-03-18 13:38:36.30155131 +0000 UTC m=+880.266762870" lastFinishedPulling="2026-03-18 13:38:45.356057807 +0000 UTC m=+889.321269367" observedRunningTime="2026-03-18 13:38:46.254112847 +0000 UTC m=+890.219324407" watchObservedRunningTime="2026-03-18 13:38:46.261829386 +0000 UTC m=+890.227040956" Mar 18 13:38:47.141233 master-0 kubenswrapper[27835]: I0318 13:38:47.140131 27835 generic.go:334] "Generic (PLEG): container finished" podID="bb0c2adb-b940-48a0-b870-826b63cc2de4" containerID="7f5be68c3d7582291cd14908174e01f47d05cb6a615fe0bdc174f3ea17239d22" exitCode=0 Mar 18 13:38:47.141233 master-0 kubenswrapper[27835]: I0318 13:38:47.140257 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerDied","Data":"7f5be68c3d7582291cd14908174e01f47d05cb6a615fe0bdc174f3ea17239d22"} Mar 18 13:38:47.483184 master-0 kubenswrapper[27835]: I0318 13:38:47.483108 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jr6nk" Mar 18 13:38:48.149916 master-0 kubenswrapper[27835]: I0318 13:38:48.149844 27835 generic.go:334] "Generic (PLEG): container finished" podID="bb0c2adb-b940-48a0-b870-826b63cc2de4" containerID="078ae025661fd8553232a3e413e771a042651dd2c92e405fd938ebf07506c005" exitCode=0 Mar 18 13:38:48.149916 master-0 kubenswrapper[27835]: I0318 13:38:48.149896 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerDied","Data":"078ae025661fd8553232a3e413e771a042651dd2c92e405fd938ebf07506c005"} Mar 18 13:38:48.840032 master-0 kubenswrapper[27835]: I0318 13:38:48.839211 27835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-7d8bdbbd8c-dfpg8" Mar 18 13:38:48.840032 master-0 kubenswrapper[27835]: I0318 13:38:48.839264 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d8bdbbd8c-dfpg8" Mar 18 13:38:48.843519 master-0 kubenswrapper[27835]: I0318 13:38:48.843457 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d8bdbbd8c-dfpg8" Mar 18 13:38:49.173437 master-0 kubenswrapper[27835]: I0318 13:38:49.173354 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"df61cb9bc5364c592fc2b37f2b4cfd31b85f17d1a9f85213d8bf8e055173e2bb"} Mar 18 13:38:49.173437 master-0 kubenswrapper[27835]: I0318 13:38:49.173405 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"53328c7846fd9d7f4218735c1502572a769053777f7fe6e461deba8a43fc2e8f"} Mar 18 13:38:49.173437 master-0 kubenswrapper[27835]: I0318 13:38:49.173440 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"98c1c16625cba14986ce72628f77863f9077e05e9480f2858d88b439fb3de6fb"} Mar 18 13:38:49.187717 master-0 kubenswrapper[27835]: I0318 13:38:49.173453 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"0e3bad9ea70ef4e4ba1be6c4f6e12595713a61618628f501b218708c2abbb27b"} Mar 18 13:38:49.187717 master-0 kubenswrapper[27835]: I0318 13:38:49.173467 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" 
event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"c4a5e584dac2b34ac28f0a8d6aa2c631affd633606b0faa22446fa9f94b64a00"} Mar 18 13:38:49.187717 master-0 kubenswrapper[27835]: I0318 13:38:49.177701 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d8bdbbd8c-dfpg8" Mar 18 13:38:49.273449 master-0 kubenswrapper[27835]: I0318 13:38:49.272950 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 13:38:50.193857 master-0 kubenswrapper[27835]: I0318 13:38:50.193766 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-n7qbc" event={"ID":"bb0c2adb-b940-48a0-b870-826b63cc2de4","Type":"ContainerStarted","Data":"f3ca465091afc5ff1be0cd76b58cad2a59d6e4a646e110629dda99786ae4bad2"} Mar 18 13:38:50.232045 master-0 kubenswrapper[27835]: I0318 13:38:50.231940 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-n7qbc" podStartSLOduration=5.951221979 podStartE2EDuration="15.231915923s" podCreationTimestamp="2026-03-18 13:38:35 +0000 UTC" firstStartedPulling="2026-03-18 13:38:36.067615863 +0000 UTC m=+880.032827423" lastFinishedPulling="2026-03-18 13:38:45.348309807 +0000 UTC m=+889.313521367" observedRunningTime="2026-03-18 13:38:50.227950186 +0000 UTC m=+894.193161826" watchObservedRunningTime="2026-03-18 13:38:50.231915923 +0000 UTC m=+894.197127483" Mar 18 13:38:50.928277 master-0 kubenswrapper[27835]: I0318 13:38:50.928184 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:50.981397 master-0 kubenswrapper[27835]: I0318 13:38:50.981349 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:51.212130 master-0 kubenswrapper[27835]: I0318 13:38:51.211973 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:38:53.364249 master-0 kubenswrapper[27835]: I0318 13:38:53.364159 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k22dv" Mar 18 13:38:55.885399 master-0 kubenswrapper[27835]: I0318 13:38:55.885353 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-49gvc" Mar 18 13:38:55.997966 master-0 kubenswrapper[27835]: I0318 13:38:55.997899 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-h6chv" Mar 18 13:38:58.299801 master-0 kubenswrapper[27835]: I0318 13:38:58.299730 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mqsmx" Mar 18 13:39:03.714459 master-0 kubenswrapper[27835]: I0318 13:39:03.714339 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-499dk"] Mar 18 13:39:03.724699 master-0 kubenswrapper[27835]: I0318 13:39:03.724424 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.732678 master-0 kubenswrapper[27835]: I0318 13:39:03.732621 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 18 13:39:03.734031 master-0 kubenswrapper[27835]: I0318 13:39:03.733991 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-499dk"] Mar 18 13:39:03.787014 master-0 kubenswrapper[27835]: I0318 13:39:03.786967 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-device-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.787343 master-0 kubenswrapper[27835]: I0318 13:39:03.787303 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-pod-volumes-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.787619 master-0 kubenswrapper[27835]: I0318 13:39:03.787603 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-sys\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.787743 master-0 kubenswrapper[27835]: I0318 13:39:03.787729 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-lvmd-config\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " 
pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.787836 master-0 kubenswrapper[27835]: I0318 13:39:03.787823 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-csi-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.787948 master-0 kubenswrapper[27835]: I0318 13:39:03.787934 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-metrics-cert\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.788037 master-0 kubenswrapper[27835]: I0318 13:39:03.788025 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-file-lock-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.788118 master-0 kubenswrapper[27835]: I0318 13:39:03.788105 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-run-udev\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.788199 master-0 kubenswrapper[27835]: I0318 13:39:03.788187 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-registration-dir\") pod \"vg-manager-499dk\" (UID: 
\"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.788327 master-0 kubenswrapper[27835]: I0318 13:39:03.788311 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm25\" (UniqueName: \"kubernetes.io/projected/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-kube-api-access-sfm25\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.788460 master-0 kubenswrapper[27835]: I0318 13:39:03.788447 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-node-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890011 master-0 kubenswrapper[27835]: I0318 13:39:03.889926 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfm25\" (UniqueName: \"kubernetes.io/projected/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-kube-api-access-sfm25\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890011 master-0 kubenswrapper[27835]: I0318 13:39:03.890014 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-node-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890304 master-0 kubenswrapper[27835]: I0318 13:39:03.890275 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-device-dir\") pod \"vg-manager-499dk\" (UID: 
\"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890351 master-0 kubenswrapper[27835]: I0318 13:39:03.890340 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-pod-volumes-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890404 master-0 kubenswrapper[27835]: I0318 13:39:03.890386 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-sys\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890486 master-0 kubenswrapper[27835]: I0318 13:39:03.890435 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-lvmd-config\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890486 master-0 kubenswrapper[27835]: I0318 13:39:03.890471 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-csi-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890548 master-0 kubenswrapper[27835]: I0318 13:39:03.890511 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-device-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 
13:39:03.890693 master-0 kubenswrapper[27835]: I0318 13:39:03.890529 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-file-lock-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890761 master-0 kubenswrapper[27835]: I0318 13:39:03.890682 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-pod-volumes-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890761 master-0 kubenswrapper[27835]: I0318 13:39:03.890709 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-metrics-cert\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890761 master-0 kubenswrapper[27835]: I0318 13:39:03.890715 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-node-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890761 master-0 kubenswrapper[27835]: I0318 13:39:03.890738 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-run-udev\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890889 master-0 kubenswrapper[27835]: I0318 13:39:03.890780 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-registration-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890889 master-0 kubenswrapper[27835]: I0318 13:39:03.890824 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-lvmd-config\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890889 master-0 kubenswrapper[27835]: I0318 13:39:03.890830 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-run-udev\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890889 master-0 kubenswrapper[27835]: I0318 13:39:03.890836 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-csi-plugin-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.890889 master-0 kubenswrapper[27835]: I0318 13:39:03.890873 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-sys\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.891034 master-0 kubenswrapper[27835]: I0318 13:39:03.890905 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-file-lock-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.891034 master-0 kubenswrapper[27835]: I0318 13:39:03.890979 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-registration-dir\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.897698 master-0 kubenswrapper[27835]: I0318 13:39:03.897646 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-metrics-cert\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:03.907942 master-0 kubenswrapper[27835]: I0318 13:39:03.907895 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfm25\" (UniqueName: \"kubernetes.io/projected/9aaddfeb-c49d-4821-bcef-43cb2f9b5611-kube-api-access-sfm25\") pod \"vg-manager-499dk\" (UID: \"9aaddfeb-c49d-4821-bcef-43cb2f9b5611\") " pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:04.061961 master-0 kubenswrapper[27835]: I0318 13:39:04.061837 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:04.509833 master-0 kubenswrapper[27835]: I0318 13:39:04.506127 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-499dk"] Mar 18 13:39:04.511983 master-0 kubenswrapper[27835]: W0318 13:39:04.511936 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aaddfeb_c49d_4821_bcef_43cb2f9b5611.slice/crio-539f967eecf8b6d78f2a90066a838cdf38fb8829fff52ef0622f1607b4e445af WatchSource:0}: Error finding container 539f967eecf8b6d78f2a90066a838cdf38fb8829fff52ef0622f1607b4e445af: Status 404 returned error can't find the container with id 539f967eecf8b6d78f2a90066a838cdf38fb8829fff52ef0622f1607b4e445af Mar 18 13:39:05.356303 master-0 kubenswrapper[27835]: I0318 13:39:05.356243 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-499dk" event={"ID":"9aaddfeb-c49d-4821-bcef-43cb2f9b5611","Type":"ContainerStarted","Data":"04db432f053936ee9e844fb66b5aa775f9897f40c69b349c08ef584c92de10ed"} Mar 18 13:39:05.356950 master-0 kubenswrapper[27835]: I0318 13:39:05.356313 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-499dk" event={"ID":"9aaddfeb-c49d-4821-bcef-43cb2f9b5611","Type":"ContainerStarted","Data":"539f967eecf8b6d78f2a90066a838cdf38fb8829fff52ef0622f1607b4e445af"} Mar 18 13:39:05.381697 master-0 kubenswrapper[27835]: I0318 13:39:05.381540 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-499dk" podStartSLOduration=2.38151178 podStartE2EDuration="2.38151178s" podCreationTimestamp="2026-03-18 13:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:39:05.378247282 +0000 UTC m=+909.343458852" watchObservedRunningTime="2026-03-18 13:39:05.38151178 +0000 UTC 
m=+909.346723340" Mar 18 13:39:05.954437 master-0 kubenswrapper[27835]: I0318 13:39:05.931188 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-n7qbc" Mar 18 13:39:06.386505 master-0 kubenswrapper[27835]: I0318 13:39:06.380952 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-499dk_9aaddfeb-c49d-4821-bcef-43cb2f9b5611/vg-manager/0.log" Mar 18 13:39:06.386505 master-0 kubenswrapper[27835]: I0318 13:39:06.381023 27835 generic.go:334] "Generic (PLEG): container finished" podID="9aaddfeb-c49d-4821-bcef-43cb2f9b5611" containerID="04db432f053936ee9e844fb66b5aa775f9897f40c69b349c08ef584c92de10ed" exitCode=1 Mar 18 13:39:06.386505 master-0 kubenswrapper[27835]: I0318 13:39:06.381061 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-499dk" event={"ID":"9aaddfeb-c49d-4821-bcef-43cb2f9b5611","Type":"ContainerDied","Data":"04db432f053936ee9e844fb66b5aa775f9897f40c69b349c08ef584c92de10ed"} Mar 18 13:39:06.386505 master-0 kubenswrapper[27835]: I0318 13:39:06.382218 27835 scope.go:117] "RemoveContainer" containerID="04db432f053936ee9e844fb66b5aa775f9897f40c69b349c08ef584c92de10ed" Mar 18 13:39:06.676138 master-0 kubenswrapper[27835]: I0318 13:39:06.676091 27835 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 18 13:39:06.873624 master-0 kubenswrapper[27835]: I0318 13:39:06.870914 27835 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-18T13:39:06.676364045Z","Handler":null,"Name":""} Mar 18 13:39:06.876193 master-0 kubenswrapper[27835]: I0318 13:39:06.876175 27835 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 
1.0.0 Mar 18 13:39:06.876292 master-0 kubenswrapper[27835]: I0318 13:39:06.876282 27835 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 18 13:39:07.397254 master-0 kubenswrapper[27835]: I0318 13:39:07.397101 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-499dk_9aaddfeb-c49d-4821-bcef-43cb2f9b5611/vg-manager/0.log" Mar 18 13:39:07.397254 master-0 kubenswrapper[27835]: I0318 13:39:07.397257 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-499dk" event={"ID":"9aaddfeb-c49d-4821-bcef-43cb2f9b5611","Type":"ContainerStarted","Data":"4265fa6f32bc68c5fce002ead5d44f5dcdd50b28af80127769044de84a8dfed8"} Mar 18 13:39:10.296180 master-0 kubenswrapper[27835]: I0318 13:39:10.294345 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-g2g5l"] Mar 18 13:39:10.296180 master-0 kubenswrapper[27835]: I0318 13:39:10.295529 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:10.299208 master-0 kubenswrapper[27835]: I0318 13:39:10.298774 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 18 13:39:10.304783 master-0 kubenswrapper[27835]: I0318 13:39:10.304655 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 18 13:39:10.337560 master-0 kubenswrapper[27835]: I0318 13:39:10.337455 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2g5l"] Mar 18 13:39:10.427464 master-0 kubenswrapper[27835]: I0318 13:39:10.427272 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765z5\" (UniqueName: \"kubernetes.io/projected/532fda53-59a9-4252-a8e2-0c5262c0afe6-kube-api-access-765z5\") pod \"openstack-operator-index-g2g5l\" (UID: \"532fda53-59a9-4252-a8e2-0c5262c0afe6\") " pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:10.534633 master-0 kubenswrapper[27835]: I0318 13:39:10.529232 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-765z5\" (UniqueName: \"kubernetes.io/projected/532fda53-59a9-4252-a8e2-0c5262c0afe6-kube-api-access-765z5\") pod \"openstack-operator-index-g2g5l\" (UID: \"532fda53-59a9-4252-a8e2-0c5262c0afe6\") " pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:10.549537 master-0 kubenswrapper[27835]: I0318 13:39:10.549403 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-765z5\" (UniqueName: \"kubernetes.io/projected/532fda53-59a9-4252-a8e2-0c5262c0afe6-kube-api-access-765z5\") pod \"openstack-operator-index-g2g5l\" (UID: \"532fda53-59a9-4252-a8e2-0c5262c0afe6\") " pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:10.668350 master-0 
kubenswrapper[27835]: I0318 13:39:10.668293 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:11.131033 master-0 kubenswrapper[27835]: I0318 13:39:11.130978 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-g2g5l"] Mar 18 13:39:11.459923 master-0 kubenswrapper[27835]: I0318 13:39:11.459861 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2g5l" event={"ID":"532fda53-59a9-4252-a8e2-0c5262c0afe6","Type":"ContainerStarted","Data":"37cd709407b4e31be738d9ec33886f51a4a34b7b9b6b074e18adfa69ceb46a91"} Mar 18 13:39:12.468966 master-0 kubenswrapper[27835]: I0318 13:39:12.468915 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-g2g5l" event={"ID":"532fda53-59a9-4252-a8e2-0c5262c0afe6","Type":"ContainerStarted","Data":"a44bf55e863ee91398cdb3b5e458b4a41890cc215714a49bafa8b038c0e59b98"} Mar 18 13:39:13.120988 master-0 kubenswrapper[27835]: I0318 13:39:13.120889 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-g2g5l" podStartSLOduration=2.024074744 podStartE2EDuration="3.120864452s" podCreationTimestamp="2026-03-18 13:39:10 +0000 UTC" firstStartedPulling="2026-03-18 13:39:11.138822719 +0000 UTC m=+915.104034279" lastFinishedPulling="2026-03-18 13:39:12.235612427 +0000 UTC m=+916.200823987" observedRunningTime="2026-03-18 13:39:13.113111023 +0000 UTC m=+917.078322603" watchObservedRunningTime="2026-03-18 13:39:13.120864452 +0000 UTC m=+917.086076022" Mar 18 13:39:14.062783 master-0 kubenswrapper[27835]: I0318 13:39:14.062708 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:14.065588 master-0 kubenswrapper[27835]: I0318 13:39:14.065539 27835 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:14.323921 master-0 kubenswrapper[27835]: I0318 13:39:14.323762 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-9cc97458b-bkd6r" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" containerID="cri-o://f0a78f5a48a297ffd715ddb4d57dccd0a5527ce4781f61229142c90e8164889e" gracePeriod=15 Mar 18 13:39:14.488290 master-0 kubenswrapper[27835]: I0318 13:39:14.488224 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9cc97458b-bkd6r_6600dd48-0759-43de-b1df-e99334590bac/console/0.log" Mar 18 13:39:14.488537 master-0 kubenswrapper[27835]: I0318 13:39:14.488305 27835 generic.go:334] "Generic (PLEG): container finished" podID="6600dd48-0759-43de-b1df-e99334590bac" containerID="f0a78f5a48a297ffd715ddb4d57dccd0a5527ce4781f61229142c90e8164889e" exitCode=2 Mar 18 13:39:14.488537 master-0 kubenswrapper[27835]: I0318 13:39:14.488458 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cc97458b-bkd6r" event={"ID":"6600dd48-0759-43de-b1df-e99334590bac","Type":"ContainerDied","Data":"f0a78f5a48a297ffd715ddb4d57dccd0a5527ce4781f61229142c90e8164889e"} Mar 18 13:39:14.488898 master-0 kubenswrapper[27835]: I0318 13:39:14.488830 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:14.489671 master-0 kubenswrapper[27835]: I0318 13:39:14.489630 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-499dk" Mar 18 13:39:14.811543 master-0 kubenswrapper[27835]: I0318 13:39:14.811507 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9cc97458b-bkd6r_6600dd48-0759-43de-b1df-e99334590bac/console/0.log" Mar 18 13:39:14.811543 master-0 kubenswrapper[27835]: I0318 13:39:14.811576 27835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:39:14.936300 master-0 kubenswrapper[27835]: I0318 13:39:14.936204 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.936300 master-0 kubenswrapper[27835]: I0318 13:39:14.936292 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.936767 master-0 kubenswrapper[27835]: I0318 13:39:14.936331 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nbgp\" (UniqueName: \"kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.936767 master-0 kubenswrapper[27835]: I0318 13:39:14.936373 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.936767 master-0 kubenswrapper[27835]: I0318 13:39:14.936406 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 
18 13:39:14.936767 master-0 kubenswrapper[27835]: I0318 13:39:14.936480 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.936767 master-0 kubenswrapper[27835]: I0318 13:39:14.936507 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert\") pod \"6600dd48-0759-43de-b1df-e99334590bac\" (UID: \"6600dd48-0759-43de-b1df-e99334590bac\") " Mar 18 13:39:14.937297 master-0 kubenswrapper[27835]: I0318 13:39:14.937060 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config" (OuterVolumeSpecName: "console-config") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:39:14.937297 master-0 kubenswrapper[27835]: I0318 13:39:14.937256 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca" (OuterVolumeSpecName: "service-ca") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:39:14.937500 master-0 kubenswrapper[27835]: I0318 13:39:14.937396 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:39:14.937890 master-0 kubenswrapper[27835]: I0318 13:39:14.937856 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:39:14.939652 master-0 kubenswrapper[27835]: I0318 13:39:14.939303 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:39:14.941121 master-0 kubenswrapper[27835]: I0318 13:39:14.941031 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp" (OuterVolumeSpecName: "kube-api-access-7nbgp") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "kube-api-access-7nbgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:39:14.941523 master-0 kubenswrapper[27835]: I0318 13:39:14.941467 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6600dd48-0759-43de-b1df-e99334590bac" (UID: "6600dd48-0759-43de-b1df-e99334590bac"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039014 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nbgp\" (UniqueName: \"kubernetes.io/projected/6600dd48-0759-43de-b1df-e99334590bac-kube-api-access-7nbgp\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039071 27835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039085 27835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039101 27835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039113 27835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6600dd48-0759-43de-b1df-e99334590bac-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 
13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039126 27835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.039569 master-0 kubenswrapper[27835]: I0318 13:39:15.039137 27835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6600dd48-0759-43de-b1df-e99334590bac-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 13:39:15.504608 master-0 kubenswrapper[27835]: I0318 13:39:15.504552 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9cc97458b-bkd6r_6600dd48-0759-43de-b1df-e99334590bac/console/0.log" Mar 18 13:39:15.505194 master-0 kubenswrapper[27835]: I0318 13:39:15.504688 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9cc97458b-bkd6r" event={"ID":"6600dd48-0759-43de-b1df-e99334590bac","Type":"ContainerDied","Data":"d9f83ea5d9ad7cd809601507209e5981026898c5afd124c3c2542a7b973c1699"} Mar 18 13:39:15.505194 master-0 kubenswrapper[27835]: I0318 13:39:15.504759 27835 scope.go:117] "RemoveContainer" containerID="f0a78f5a48a297ffd715ddb4d57dccd0a5527ce4781f61229142c90e8164889e" Mar 18 13:39:15.505194 master-0 kubenswrapper[27835]: I0318 13:39:15.504704 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-9cc97458b-bkd6r" Mar 18 13:39:15.548161 master-0 kubenswrapper[27835]: I0318 13:39:15.548014 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 13:39:15.554825 master-0 kubenswrapper[27835]: I0318 13:39:15.554761 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-9cc97458b-bkd6r"] Mar 18 13:39:16.301060 master-0 kubenswrapper[27835]: I0318 13:39:16.300952 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6600dd48-0759-43de-b1df-e99334590bac" path="/var/lib/kubelet/pods/6600dd48-0759-43de-b1df-e99334590bac/volumes" Mar 18 13:39:20.668912 master-0 kubenswrapper[27835]: I0318 13:39:20.668826 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:20.668912 master-0 kubenswrapper[27835]: I0318 13:39:20.668916 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:20.703014 master-0 kubenswrapper[27835]: I0318 13:39:20.702947 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:21.589865 master-0 kubenswrapper[27835]: I0318 13:39:21.589720 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-g2g5l" Mar 18 13:39:22.457996 master-0 kubenswrapper[27835]: I0318 13:39:22.457927 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq"] Mar 18 13:39:22.459668 master-0 kubenswrapper[27835]: E0318 13:39:22.459623 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" Mar 18 13:39:22.459836 master-0 
kubenswrapper[27835]: I0318 13:39:22.459797 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" Mar 18 13:39:22.460169 master-0 kubenswrapper[27835]: I0318 13:39:22.460150 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6600dd48-0759-43de-b1df-e99334590bac" containerName="console" Mar 18 13:39:22.461623 master-0 kubenswrapper[27835]: I0318 13:39:22.461599 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.476258 master-0 kubenswrapper[27835]: I0318 13:39:22.476199 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq"] Mar 18 13:39:22.567960 master-0 kubenswrapper[27835]: I0318 13:39:22.566879 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.567960 master-0 kubenswrapper[27835]: I0318 13:39:22.566943 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.567960 master-0 kubenswrapper[27835]: I0318 13:39:22.567091 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gz4pk\" (UniqueName: \"kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.669242 master-0 kubenswrapper[27835]: I0318 13:39:22.669145 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4pk\" (UniqueName: \"kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.669506 master-0 kubenswrapper[27835]: I0318 13:39:22.669445 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.669506 master-0 kubenswrapper[27835]: I0318 13:39:22.669490 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.670107 master-0 kubenswrapper[27835]: I0318 13:39:22.670063 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.670202 master-0 kubenswrapper[27835]: I0318 13:39:22.670165 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.692681 master-0 kubenswrapper[27835]: I0318 13:39:22.691899 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4pk\" (UniqueName: \"kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:22.790988 master-0 kubenswrapper[27835]: I0318 13:39:22.790833 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" Mar 18 13:39:23.360218 master-0 kubenswrapper[27835]: I0318 13:39:23.358570 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq"] Mar 18 13:39:23.369104 master-0 kubenswrapper[27835]: W0318 13:39:23.367938 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbca4be6_7742_4c5b_ae0f_29f374f6e8a8.slice/crio-61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5 WatchSource:0}: Error finding container 61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5: Status 404 returned error can't find the container with id 61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5 Mar 18 13:39:23.579064 master-0 kubenswrapper[27835]: I0318 13:39:23.579000 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerStarted","Data":"3ae016ef6209be7b76e611b42da2bccdf2028b026e9054ee4d10886d4cb41750"} Mar 18 13:39:23.579064 master-0 kubenswrapper[27835]: I0318 13:39:23.579055 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerStarted","Data":"61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5"} Mar 18 13:39:24.591777 master-0 kubenswrapper[27835]: I0318 13:39:24.591592 27835 generic.go:334] "Generic (PLEG): container finished" podID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerID="3ae016ef6209be7b76e611b42da2bccdf2028b026e9054ee4d10886d4cb41750" exitCode=0 Mar 18 13:39:24.591777 master-0 kubenswrapper[27835]: I0318 13:39:24.591662 27835 
generic.go:334] "Generic (PLEG): container finished" podID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerID="dc2e18dcd4deab0e52a2e25b6df45c87c4416feb84fd22432bd461db10ba13cf" exitCode=0 Mar 18 13:39:24.591777 master-0 kubenswrapper[27835]: I0318 13:39:24.591666 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerDied","Data":"3ae016ef6209be7b76e611b42da2bccdf2028b026e9054ee4d10886d4cb41750"} Mar 18 13:39:24.591777 master-0 kubenswrapper[27835]: I0318 13:39:24.591743 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerDied","Data":"dc2e18dcd4deab0e52a2e25b6df45c87c4416feb84fd22432bd461db10ba13cf"} Mar 18 13:39:25.604048 master-0 kubenswrapper[27835]: I0318 13:39:25.603979 27835 generic.go:334] "Generic (PLEG): container finished" podID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerID="6d795d8c2673d00bccb03a4f5a2547cde7f971131d077fc71ab76a8ab9e1e673" exitCode=0 Mar 18 13:39:25.604048 master-0 kubenswrapper[27835]: I0318 13:39:25.604050 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerDied","Data":"6d795d8c2673d00bccb03a4f5a2547cde7f971131d077fc71ab76a8ab9e1e673"} Mar 18 13:39:27.010552 master-0 kubenswrapper[27835]: I0318 13:39:27.010496 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq"
Mar 18 13:39:27.064320 master-0 kubenswrapper[27835]: I0318 13:39:27.060488 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz4pk\" (UniqueName: \"kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk\") pod \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") "
Mar 18 13:39:27.064320 master-0 kubenswrapper[27835]: I0318 13:39:27.060625 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util\") pod \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") "
Mar 18 13:39:27.064320 master-0 kubenswrapper[27835]: I0318 13:39:27.060728 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle\") pod \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\" (UID: \"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8\") "
Mar 18 13:39:27.064320 master-0 kubenswrapper[27835]: I0318 13:39:27.061486 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle" (OuterVolumeSpecName: "bundle") pod "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" (UID: "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:39:27.064320 master-0 kubenswrapper[27835]: I0318 13:39:27.064049 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk" (OuterVolumeSpecName: "kube-api-access-gz4pk") pod "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" (UID: "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8"). InnerVolumeSpecName "kube-api-access-gz4pk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:39:27.074528 master-0 kubenswrapper[27835]: I0318 13:39:27.073508 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util" (OuterVolumeSpecName: "util") pod "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" (UID: "fbca4be6-7742-4c5b-ae0f-29f374f6e8a8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:39:27.163123 master-0 kubenswrapper[27835]: I0318 13:39:27.163055 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz4pk\" (UniqueName: \"kubernetes.io/projected/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-kube-api-access-gz4pk\") on node \"master-0\" DevicePath \"\""
Mar 18 13:39:27.163123 master-0 kubenswrapper[27835]: I0318 13:39:27.163099 27835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-util\") on node \"master-0\" DevicePath \"\""
Mar 18 13:39:27.163123 master-0 kubenswrapper[27835]: I0318 13:39:27.163109 27835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbca4be6-7742-4c5b-ae0f-29f374f6e8a8-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:39:27.624732 master-0 kubenswrapper[27835]: I0318 13:39:27.624597 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq" event={"ID":"fbca4be6-7742-4c5b-ae0f-29f374f6e8a8","Type":"ContainerDied","Data":"61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5"}
Mar 18 13:39:27.624732 master-0 kubenswrapper[27835]: I0318 13:39:27.624681 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e823bfb417e609e40cc63463cc6c82ab8237f277d52414d718200d8af2a7e5"
Mar 18 13:39:27.625116 master-0 kubenswrapper[27835]: I0318 13:39:27.624759 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq"
Mar 18 13:39:35.261364 master-0 kubenswrapper[27835]: I0318 13:39:35.261295 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"]
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: E0318 13:39:35.261778 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="util"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: I0318 13:39:35.261797 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="util"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: E0318 13:39:35.261851 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="extract"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: I0318 13:39:35.261862 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="extract"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: E0318 13:39:35.261894 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="pull"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: I0318 13:39:35.261905 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="pull"
Mar 18 13:39:35.262144 master-0 kubenswrapper[27835]: I0318 13:39:35.262110 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbca4be6-7742-4c5b-ae0f-29f374f6e8a8" containerName="extract"
Mar 18 13:39:35.262848 master-0 kubenswrapper[27835]: I0318 13:39:35.262816 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:35.308955 master-0 kubenswrapper[27835]: I0318 13:39:35.308894 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"]
Mar 18 13:39:35.424235 master-0 kubenswrapper[27835]: I0318 13:39:35.424167 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfhmh\" (UniqueName: \"kubernetes.io/projected/02f5e574-fb26-4580-8e90-b6c6f66b3828-kube-api-access-bfhmh\") pod \"openstack-operator-controller-init-b85c4d696-nqjrq\" (UID: \"02f5e574-fb26-4580-8e90-b6c6f66b3828\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:35.526210 master-0 kubenswrapper[27835]: I0318 13:39:35.526074 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfhmh\" (UniqueName: \"kubernetes.io/projected/02f5e574-fb26-4580-8e90-b6c6f66b3828-kube-api-access-bfhmh\") pod \"openstack-operator-controller-init-b85c4d696-nqjrq\" (UID: \"02f5e574-fb26-4580-8e90-b6c6f66b3828\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:35.556090 master-0 kubenswrapper[27835]: I0318 13:39:35.556016 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfhmh\" (UniqueName: \"kubernetes.io/projected/02f5e574-fb26-4580-8e90-b6c6f66b3828-kube-api-access-bfhmh\") pod \"openstack-operator-controller-init-b85c4d696-nqjrq\" (UID: \"02f5e574-fb26-4580-8e90-b6c6f66b3828\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:35.584555 master-0 kubenswrapper[27835]: I0318 13:39:35.584464 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:35.901509 master-0 kubenswrapper[27835]: I0318 13:39:35.897019 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"]
Mar 18 13:39:35.938587 master-0 kubenswrapper[27835]: I0318 13:39:35.935093 27835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 13:39:36.733126 master-0 kubenswrapper[27835]: I0318 13:39:36.733006 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq" event={"ID":"02f5e574-fb26-4580-8e90-b6c6f66b3828","Type":"ContainerStarted","Data":"663b2f74b30dd8fa3cdb58cf3d00cde234818b6a7edde76cabee4b6662464e9a"}
Mar 18 13:39:41.784363 master-0 kubenswrapper[27835]: I0318 13:39:41.784293 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq" event={"ID":"02f5e574-fb26-4580-8e90-b6c6f66b3828","Type":"ContainerStarted","Data":"ff0b5456c2a10b928c333d962b5a0e8e2972da8735cc2b2ac13fb2bcf8d4b4d5"}
Mar 18 13:39:41.785000 master-0 kubenswrapper[27835]: I0318 13:39:41.784755 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:39:41.819318 master-0 kubenswrapper[27835]: I0318 13:39:41.819221 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq" podStartSLOduration=1.89995766 podStartE2EDuration="6.819202228s" podCreationTimestamp="2026-03-18 13:39:35 +0000 UTC" firstStartedPulling="2026-03-18 13:39:35.935042673 +0000 UTC m=+939.900254233" lastFinishedPulling="2026-03-18 13:39:40.854287241 +0000 UTC m=+944.819498801" observedRunningTime="2026-03-18 13:39:41.816523295 +0000 UTC m=+945.781734875" watchObservedRunningTime="2026-03-18 13:39:41.819202228 +0000 UTC m=+945.784413788"
Mar 18 13:39:45.586910 master-0 kubenswrapper[27835]: I0318 13:39:45.586836 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-nqjrq"
Mar 18 13:40:06.393442 master-0 kubenswrapper[27835]: I0318 13:40:06.390449 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"]
Mar 18 13:40:06.393442 master-0 kubenswrapper[27835]: I0318 13:40:06.391493 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:06.409375 master-0 kubenswrapper[27835]: I0318 13:40:06.409329 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"]
Mar 18 13:40:06.419279 master-0 kubenswrapper[27835]: I0318 13:40:06.418972 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"]
Mar 18 13:40:06.420440 master-0 kubenswrapper[27835]: I0318 13:40:06.420394 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:06.464543 master-0 kubenswrapper[27835]: I0318 13:40:06.464488 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"]
Mar 18 13:40:06.466109 master-0 kubenswrapper[27835]: I0318 13:40:06.466081 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:06.479364 master-0 kubenswrapper[27835]: I0318 13:40:06.477558 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhqxb\" (UniqueName: \"kubernetes.io/projected/ddaec091-2dee-4ea7-a06e-c30e9c1ba96e-kube-api-access-nhqxb\") pod \"barbican-operator-controller-manager-59bc569d95-9hllz\" (UID: \"ddaec091-2dee-4ea7-a06e-c30e9c1ba96e\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:06.479364 master-0 kubenswrapper[27835]: I0318 13:40:06.477643 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzmg\" (UniqueName: \"kubernetes.io/projected/ee6bd2ee-8a51-4d24-9abf-5029e73a106a-kube-api-access-dhzmg\") pod \"cinder-operator-controller-manager-8d58dc466-7vhnh\" (UID: \"ee6bd2ee-8a51-4d24-9abf-5029e73a106a\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:06.525088 master-0 kubenswrapper[27835]: I0318 13:40:06.525012 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"]
Mar 18 13:40:06.542620 master-0 kubenswrapper[27835]: I0318 13:40:06.542575 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"]
Mar 18 13:40:06.572718 master-0 kubenswrapper[27835]: I0318 13:40:06.572614 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"]
Mar 18 13:40:06.574154 master-0 kubenswrapper[27835]: I0318 13:40:06.574122 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:06.579605 master-0 kubenswrapper[27835]: I0318 13:40:06.579322 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhqxb\" (UniqueName: \"kubernetes.io/projected/ddaec091-2dee-4ea7-a06e-c30e9c1ba96e-kube-api-access-nhqxb\") pod \"barbican-operator-controller-manager-59bc569d95-9hllz\" (UID: \"ddaec091-2dee-4ea7-a06e-c30e9c1ba96e\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:06.579605 master-0 kubenswrapper[27835]: I0318 13:40:06.579441 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrq6\" (UniqueName: \"kubernetes.io/projected/8979be93-aaa2-4ad9-b6d3-50af024d681a-kube-api-access-zsrq6\") pod \"designate-operator-controller-manager-588d4d986b-zr74z\" (UID: \"8979be93-aaa2-4ad9-b6d3-50af024d681a\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:06.579605 master-0 kubenswrapper[27835]: I0318 13:40:06.579500 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzmg\" (UniqueName: \"kubernetes.io/projected/ee6bd2ee-8a51-4d24-9abf-5029e73a106a-kube-api-access-dhzmg\") pod \"cinder-operator-controller-manager-8d58dc466-7vhnh\" (UID: \"ee6bd2ee-8a51-4d24-9abf-5029e73a106a\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:06.612444 master-0 kubenswrapper[27835]: I0318 13:40:06.608785 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"]
Mar 18 13:40:06.631444 master-0 kubenswrapper[27835]: I0318 13:40:06.624157 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"]
Mar 18 13:40:06.631444 master-0 kubenswrapper[27835]: I0318 13:40:06.625577 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"]
Mar 18 13:40:06.631444 master-0 kubenswrapper[27835]: I0318 13:40:06.627035 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhqxb\" (UniqueName: \"kubernetes.io/projected/ddaec091-2dee-4ea7-a06e-c30e9c1ba96e-kube-api-access-nhqxb\") pod \"barbican-operator-controller-manager-59bc569d95-9hllz\" (UID: \"ddaec091-2dee-4ea7-a06e-c30e9c1ba96e\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:06.631444 master-0 kubenswrapper[27835]: I0318 13:40:06.627492 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzmg\" (UniqueName: \"kubernetes.io/projected/ee6bd2ee-8a51-4d24-9abf-5029e73a106a-kube-api-access-dhzmg\") pod \"cinder-operator-controller-manager-8d58dc466-7vhnh\" (UID: \"ee6bd2ee-8a51-4d24-9abf-5029e73a106a\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:06.631444 master-0 kubenswrapper[27835]: I0318 13:40:06.627932 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:06.681325 master-0 kubenswrapper[27835]: I0318 13:40:06.680724 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpvg\" (UniqueName: \"kubernetes.io/projected/b6047c49-c76b-4345-a72b-74be859fddc7-kube-api-access-hbpvg\") pod \"glance-operator-controller-manager-79df6bcc97-hjwh4\" (UID: \"b6047c49-c76b-4345-a72b-74be859fddc7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:06.681667 master-0 kubenswrapper[27835]: I0318 13:40:06.681643 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsrq6\" (UniqueName: \"kubernetes.io/projected/8979be93-aaa2-4ad9-b6d3-50af024d681a-kube-api-access-zsrq6\") pod \"designate-operator-controller-manager-588d4d986b-zr74z\" (UID: \"8979be93-aaa2-4ad9-b6d3-50af024d681a\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:06.681776 master-0 kubenswrapper[27835]: I0318 13:40:06.681761 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq46p\" (UniqueName: \"kubernetes.io/projected/265baffa-3bec-4faa-be16-00c3d75f3b99-kube-api-access-vq46p\") pod \"heat-operator-controller-manager-67dd5f86f5-cjvmp\" (UID: \"265baffa-3bec-4faa-be16-00c3d75f3b99\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:06.685002 master-0 kubenswrapper[27835]: I0318 13:40:06.684811 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"]
Mar 18 13:40:06.685002 master-0 kubenswrapper[27835]: I0318 13:40:06.684939 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:06.690489 master-0 kubenswrapper[27835]: I0318 13:40:06.688300 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"]
Mar 18 13:40:06.690489 master-0 kubenswrapper[27835]: I0318 13:40:06.689772 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.696810 master-0 kubenswrapper[27835]: I0318 13:40:06.696301 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 18 13:40:06.733535 master-0 kubenswrapper[27835]: I0318 13:40:06.731988 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsrq6\" (UniqueName: \"kubernetes.io/projected/8979be93-aaa2-4ad9-b6d3-50af024d681a-kube-api-access-zsrq6\") pod \"designate-operator-controller-manager-588d4d986b-zr74z\" (UID: \"8979be93-aaa2-4ad9-b6d3-50af024d681a\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:06.747769 master-0 kubenswrapper[27835]: I0318 13:40:06.746130 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"]
Mar 18 13:40:06.764067 master-0 kubenswrapper[27835]: I0318 13:40:06.762475 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:06.783547 master-0 kubenswrapper[27835]: I0318 13:40:06.783484 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq46p\" (UniqueName: \"kubernetes.io/projected/265baffa-3bec-4faa-be16-00c3d75f3b99-kube-api-access-vq46p\") pod \"heat-operator-controller-manager-67dd5f86f5-cjvmp\" (UID: \"265baffa-3bec-4faa-be16-00c3d75f3b99\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:06.783784 master-0 kubenswrapper[27835]: I0318 13:40:06.783561 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.783784 master-0 kubenswrapper[27835]: I0318 13:40:06.783584 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg6cm\" (UniqueName: \"kubernetes.io/projected/71018e48-64b3-42f0-b37f-dfa72163b1bf-kube-api-access-mg6cm\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.783784 master-0 kubenswrapper[27835]: I0318 13:40:06.783679 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8qqk\" (UniqueName: \"kubernetes.io/projected/d109b5af-e96b-47ce-b4cc-c41c4e87ee49-kube-api-access-d8qqk\") pod \"horizon-operator-controller-manager-8464cc45fb-xpppk\" (UID: \"d109b5af-e96b-47ce-b4cc-c41c4e87ee49\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:06.783784 master-0 kubenswrapper[27835]: I0318 13:40:06.783713 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpvg\" (UniqueName: \"kubernetes.io/projected/b6047c49-c76b-4345-a72b-74be859fddc7-kube-api-access-hbpvg\") pod \"glance-operator-controller-manager-79df6bcc97-hjwh4\" (UID: \"b6047c49-c76b-4345-a72b-74be859fddc7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:06.789547 master-0 kubenswrapper[27835]: I0318 13:40:06.784050 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"]
Mar 18 13:40:06.795103 master-0 kubenswrapper[27835]: I0318 13:40:06.793839 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:06.826152 master-0 kubenswrapper[27835]: I0318 13:40:06.815475 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"]
Mar 18 13:40:06.826152 master-0 kubenswrapper[27835]: I0318 13:40:06.816841 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:06.831969 master-0 kubenswrapper[27835]: I0318 13:40:06.827379 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpvg\" (UniqueName: \"kubernetes.io/projected/b6047c49-c76b-4345-a72b-74be859fddc7-kube-api-access-hbpvg\") pod \"glance-operator-controller-manager-79df6bcc97-hjwh4\" (UID: \"b6047c49-c76b-4345-a72b-74be859fddc7\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:06.831969 master-0 kubenswrapper[27835]: I0318 13:40:06.828851 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:06.893106 master-0 kubenswrapper[27835]: I0318 13:40:06.892946 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8qqk\" (UniqueName: \"kubernetes.io/projected/d109b5af-e96b-47ce-b4cc-c41c4e87ee49-kube-api-access-d8qqk\") pod \"horizon-operator-controller-manager-8464cc45fb-xpppk\" (UID: \"d109b5af-e96b-47ce-b4cc-c41c4e87ee49\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:06.893202 master-0 kubenswrapper[27835]: I0318 13:40:06.893169 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.893249 master-0 kubenswrapper[27835]: I0318 13:40:06.893204 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg6cm\" (UniqueName: \"kubernetes.io/projected/71018e48-64b3-42f0-b37f-dfa72163b1bf-kube-api-access-mg6cm\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.893249 master-0 kubenswrapper[27835]: I0318 13:40:06.893227 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq59w\" (UniqueName: \"kubernetes.io/projected/9d322608-4d0f-41cd-aff3-5de61bc2d86e-kube-api-access-nq59w\") pod \"keystone-operator-controller-manager-768b96df4c-9xn5p\" (UID: \"9d322608-4d0f-41cd-aff3-5de61bc2d86e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:06.899451 master-0 kubenswrapper[27835]: E0318 13:40:06.893857 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 18 13:40:06.899451 master-0 kubenswrapper[27835]: E0318 13:40:06.896228 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf nodeName:}" failed. No retries permitted until 2026-03-18 13:40:07.396205784 +0000 UTC m=+971.361417344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found
Mar 18 13:40:06.900343 master-0 kubenswrapper[27835]: I0318 13:40:06.900310 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq46p\" (UniqueName: \"kubernetes.io/projected/265baffa-3bec-4faa-be16-00c3d75f3b99-kube-api-access-vq46p\") pod \"heat-operator-controller-manager-67dd5f86f5-cjvmp\" (UID: \"265baffa-3bec-4faa-be16-00c3d75f3b99\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:06.928984 master-0 kubenswrapper[27835]: I0318 13:40:06.928919 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8qqk\" (UniqueName: \"kubernetes.io/projected/d109b5af-e96b-47ce-b4cc-c41c4e87ee49-kube-api-access-d8qqk\") pod \"horizon-operator-controller-manager-8464cc45fb-xpppk\" (UID: \"d109b5af-e96b-47ce-b4cc-c41c4e87ee49\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:06.950946 master-0 kubenswrapper[27835]: I0318 13:40:06.950850 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg6cm\" (UniqueName: \"kubernetes.io/projected/71018e48-64b3-42f0-b37f-dfa72163b1bf-kube-api-access-mg6cm\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:06.959158 master-0 kubenswrapper[27835]: I0318 13:40:06.958487 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"]
Mar 18 13:40:06.961068 master-0 kubenswrapper[27835]: I0318 13:40:06.959963 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"
Mar 18 13:40:06.968146 master-0 kubenswrapper[27835]: I0318 13:40:06.968111 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"]
Mar 18 13:40:06.979578 master-0 kubenswrapper[27835]: I0318 13:40:06.979530 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"]
Mar 18 13:40:06.988304 master-0 kubenswrapper[27835]: I0318 13:40:06.980972 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"
Mar 18 13:40:06.994359 master-0 kubenswrapper[27835]: I0318 13:40:06.994314 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq59w\" (UniqueName: \"kubernetes.io/projected/9d322608-4d0f-41cd-aff3-5de61bc2d86e-kube-api-access-nq59w\") pod \"keystone-operator-controller-manager-768b96df4c-9xn5p\" (UID: \"9d322608-4d0f-41cd-aff3-5de61bc2d86e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:06.995038 master-0 kubenswrapper[27835]: I0318 13:40:06.994407 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qns4\" (UniqueName: \"kubernetes.io/projected/f84705cb-9f70-43e3-ba36-6f9530ad53af-kube-api-access-4qns4\") pod \"ironic-operator-controller-manager-6f787dddc9-pmtxh\" (UID: \"f84705cb-9f70-43e3-ba36-6f9530ad53af\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"
Mar 18 13:40:06.995359 master-0 kubenswrapper[27835]: I0318 13:40:06.995334 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"]
Mar 18 13:40:07.006934 master-0 kubenswrapper[27835]: I0318 13:40:07.006835 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"]
Mar 18 13:40:07.008051 master-0 kubenswrapper[27835]: I0318 13:40:07.007957 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"
Mar 18 13:40:07.021498 master-0 kubenswrapper[27835]: I0318 13:40:07.014895 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:07.021498 master-0 kubenswrapper[27835]: I0318 13:40:07.018909 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"]
Mar 18 13:40:07.048753 master-0 kubenswrapper[27835]: I0318 13:40:07.038505 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"]
Mar 18 13:40:07.050546 master-0 kubenswrapper[27835]: I0318 13:40:07.050512 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"]
Mar 18 13:40:07.054095 master-0 kubenswrapper[27835]: I0318 13:40:07.054013 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq59w\" (UniqueName: \"kubernetes.io/projected/9d322608-4d0f-41cd-aff3-5de61bc2d86e-kube-api-access-nq59w\") pod \"keystone-operator-controller-manager-768b96df4c-9xn5p\" (UID: \"9d322608-4d0f-41cd-aff3-5de61bc2d86e\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:07.086554 master-0 kubenswrapper[27835]: I0318 13:40:07.079942 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:07.102888 master-0 kubenswrapper[27835]: I0318 13:40:07.101324 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qns4\" (UniqueName: \"kubernetes.io/projected/f84705cb-9f70-43e3-ba36-6f9530ad53af-kube-api-access-4qns4\") pod \"ironic-operator-controller-manager-6f787dddc9-pmtxh\" (UID: \"f84705cb-9f70-43e3-ba36-6f9530ad53af\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"
Mar 18 13:40:07.102888 master-0 kubenswrapper[27835]: I0318 13:40:07.101399 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppj65\" (UniqueName: \"kubernetes.io/projected/3640aec0-34c2-454b-95cb-822cccc6425f-kube-api-access-ppj65\") pod \"manila-operator-controller-manager-55f864c847-mqptr\" (UID: \"3640aec0-34c2-454b-95cb-822cccc6425f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"
Mar 18 13:40:07.102888 master-0 kubenswrapper[27835]: I0318 13:40:07.101443 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8n7c\" (UniqueName: \"kubernetes.io/projected/f0c88737-f524-4267-9c13-719936da8c4e-kube-api-access-p8n7c\") pod \"mariadb-operator-controller-manager-67ccfc9778-n87m7\" (UID: \"f0c88737-f524-4267-9c13-719936da8c4e\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"
Mar 18 13:40:07.122778 master-0 kubenswrapper[27835]: I0318 13:40:07.121897 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"
Mar 18 13:40:07.205495 master-0 kubenswrapper[27835]: I0318 13:40:07.203560 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:07.223224 master-0 kubenswrapper[27835]: I0318 13:40:07.218448 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"]
Mar 18 13:40:07.241781 master-0 kubenswrapper[27835]: I0318 13:40:07.234578 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppj65\" (UniqueName: \"kubernetes.io/projected/3640aec0-34c2-454b-95cb-822cccc6425f-kube-api-access-ppj65\") pod \"manila-operator-controller-manager-55f864c847-mqptr\" (UID: \"3640aec0-34c2-454b-95cb-822cccc6425f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"
Mar 18 13:40:07.241781 master-0 kubenswrapper[27835]: I0318 13:40:07.234628 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8n7c\" (UniqueName: \"kubernetes.io/projected/f0c88737-f524-4267-9c13-719936da8c4e-kube-api-access-p8n7c\") pod \"mariadb-operator-controller-manager-67ccfc9778-n87m7\" (UID: \"f0c88737-f524-4267-9c13-719936da8c4e\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"
Mar 18 13:40:07.241781 master-0 kubenswrapper[27835]: I0318 13:40:07.234663 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h7gz\" (UniqueName: \"kubernetes.io/projected/5b456512-ca01-4530-9368-6380bd8144e8-kube-api-access-2h7gz\") pod \"neutron-operator-controller-manager-767865f676-g2nh4\" (UID: \"5b456512-ca01-4530-9368-6380bd8144e8\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"
Mar 18 13:40:07.262153 master-0 kubenswrapper[27835]: I0318 13:40:07.258540 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"]
Mar 18 13:40:07.262153 master-0 kubenswrapper[27835]: I0318 13:40:07.259765 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"
Mar 18 13:40:07.285244 master-0 kubenswrapper[27835]: I0318 13:40:07.266884 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppj65\" (UniqueName: \"kubernetes.io/projected/3640aec0-34c2-454b-95cb-822cccc6425f-kube-api-access-ppj65\") pod \"manila-operator-controller-manager-55f864c847-mqptr\" (UID: \"3640aec0-34c2-454b-95cb-822cccc6425f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"
Mar 18 13:40:07.285244 master-0 kubenswrapper[27835]: I0318 13:40:07.271032 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"]
Mar 18 13:40:07.285244 master-0 kubenswrapper[27835]: I0318 13:40:07.275434 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8n7c\" (UniqueName: \"kubernetes.io/projected/f0c88737-f524-4267-9c13-719936da8c4e-kube-api-access-p8n7c\") pod \"mariadb-operator-controller-manager-67ccfc9778-n87m7\" (UID: \"f0c88737-f524-4267-9c13-719936da8c4e\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"
Mar 18 13:40:07.285244 master-0 kubenswrapper[27835]: I0318 13:40:07.283592 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:07.287115 master-0 kubenswrapper[27835]: I0318 13:40:07.287085 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qns4\" (UniqueName: \"kubernetes.io/projected/f84705cb-9f70-43e3-ba36-6f9530ad53af-kube-api-access-4qns4\") pod \"ironic-operator-controller-manager-6f787dddc9-pmtxh\" (UID: \"f84705cb-9f70-43e3-ba36-6f9530ad53af\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"
Mar 18 13:40:07.330201 master-0 kubenswrapper[27835]: I0318 13:40:07.327491 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt"]
Mar 18 13:40:07.330201 master-0 kubenswrapper[27835]: I0318 13:40:07.329051 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt"
Mar 18 13:40:07.353599 master-0 kubenswrapper[27835]: I0318 13:40:07.336888 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h7gz\" (UniqueName: \"kubernetes.io/projected/5b456512-ca01-4530-9368-6380bd8144e8-kube-api-access-2h7gz\") pod \"neutron-operator-controller-manager-767865f676-g2nh4\" (UID: \"5b456512-ca01-4530-9368-6380bd8144e8\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"
Mar 18 13:40:07.353599 master-0 kubenswrapper[27835]: I0318 13:40:07.336992 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27bx\" (UniqueName: \"kubernetes.io/projected/9d6a5f9d-4997-4253-9a8a-2783b30e219a-kube-api-access-j27bx\") pod \"nova-operator-controller-manager-5d488d59fb-fhj89\" (UID: \"9d6a5f9d-4997-4253-9a8a-2783b30e219a\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"
Mar 18 13:40:07.353599
master-0 kubenswrapper[27835]: I0318 13:40:07.352026 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt"] Mar 18 13:40:07.364440 master-0 kubenswrapper[27835]: I0318 13:40:07.361374 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh" Mar 18 13:40:07.370060 master-0 kubenswrapper[27835]: I0318 13:40:07.368269 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h7gz\" (UniqueName: \"kubernetes.io/projected/5b456512-ca01-4530-9368-6380bd8144e8-kube-api-access-2h7gz\") pod \"neutron-operator-controller-manager-767865f676-g2nh4\" (UID: \"5b456512-ca01-4530-9368-6380bd8144e8\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" Mar 18 13:40:07.372326 master-0 kubenswrapper[27835]: I0318 13:40:07.372261 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"] Mar 18 13:40:07.373830 master-0 kubenswrapper[27835]: I0318 13:40:07.373798 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.378132 master-0 kubenswrapper[27835]: I0318 13:40:07.378095 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 18 13:40:07.379981 master-0 kubenswrapper[27835]: I0318 13:40:07.379940 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"] Mar 18 13:40:07.387202 master-0 kubenswrapper[27835]: I0318 13:40:07.387157 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-mmprj"] Mar 18 13:40:07.388509 master-0 kubenswrapper[27835]: I0318 13:40:07.388431 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:07.396507 master-0 kubenswrapper[27835]: I0318 13:40:07.396473 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-mmprj"] Mar 18 13:40:07.418295 master-0 kubenswrapper[27835]: I0318 13:40:07.414045 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-zb64s"] Mar 18 13:40:07.418295 master-0 kubenswrapper[27835]: I0318 13:40:07.416511 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 13:40:07.418295 master-0 kubenswrapper[27835]: I0318 13:40:07.417850 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr" Mar 18 13:40:07.421565 master-0 kubenswrapper[27835]: I0318 13:40:07.421492 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-zb64s"] Mar 18 13:40:07.437717 master-0 kubenswrapper[27835]: I0318 13:40:07.437350 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9"] Mar 18 13:40:07.439498 master-0 kubenswrapper[27835]: I0318 13:40:07.439452 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27bx\" (UniqueName: \"kubernetes.io/projected/9d6a5f9d-4997-4253-9a8a-2783b30e219a-kube-api-access-j27bx\") pod \"nova-operator-controller-manager-5d488d59fb-fhj89\" (UID: \"9d6a5f9d-4997-4253-9a8a-2783b30e219a\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" Mar 18 13:40:07.439570 master-0 kubenswrapper[27835]: I0318 13:40:07.439526 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zwn\" (UniqueName: \"kubernetes.io/projected/ef8868b4-d8bd-4e66-a07e-b853494fadd4-kube-api-access-h4zwn\") pod \"octavia-operator-controller-manager-5b9f45d989-j47dt\" (UID: \"ef8868b4-d8bd-4e66-a07e-b853494fadd4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" Mar 18 13:40:07.439616 master-0 kubenswrapper[27835]: I0318 13:40:07.439570 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.439616 master-0 
kubenswrapper[27835]: I0318 13:40:07.439607 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" Mar 18 13:40:07.439698 master-0 kubenswrapper[27835]: I0318 13:40:07.439643 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sv45\" (UniqueName: \"kubernetes.io/projected/8ae1c4d1-c972-4a7f-b82b-77081d857b54-kube-api-access-6sv45\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.447999 master-0 kubenswrapper[27835]: I0318 13:40:07.447949 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9"] Mar 18 13:40:07.448232 master-0 kubenswrapper[27835]: I0318 13:40:07.448096 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:07.451972 master-0 kubenswrapper[27835]: E0318 13:40:07.451922 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:07.452103 master-0 kubenswrapper[27835]: E0318 13:40:07.452006 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf nodeName:}" failed. No retries permitted until 2026-03-18 13:40:08.451988972 +0000 UTC m=+972.417200532 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:07.467314 master-0 kubenswrapper[27835]: I0318 13:40:07.467274 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4"] Mar 18 13:40:07.470131 master-0 kubenswrapper[27835]: I0318 13:40:07.469904 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:07.474628 master-0 kubenswrapper[27835]: I0318 13:40:07.474396 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27bx\" (UniqueName: \"kubernetes.io/projected/9d6a5f9d-4997-4253-9a8a-2783b30e219a-kube-api-access-j27bx\") pod \"nova-operator-controller-manager-5d488d59fb-fhj89\" (UID: \"9d6a5f9d-4997-4253-9a8a-2783b30e219a\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" Mar 18 13:40:07.502288 master-0 kubenswrapper[27835]: I0318 13:40:07.502234 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4"] Mar 18 13:40:07.535182 master-0 kubenswrapper[27835]: I0318 13:40:07.535142 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4"] Mar 18 13:40:07.538568 master-0 kubenswrapper[27835]: I0318 13:40:07.536353 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:07.543604 master-0 kubenswrapper[27835]: I0318 13:40:07.543557 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4"] Mar 18 13:40:07.552995 master-0 kubenswrapper[27835]: I0318 13:40:07.552951 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7f7\" (UniqueName: \"kubernetes.io/projected/bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54-kube-api-access-jr7f7\") pod \"placement-operator-controller-manager-5784578c99-zb64s\" (UID: \"bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 13:40:07.553206 master-0 kubenswrapper[27835]: I0318 13:40:07.553012 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sv45\" (UniqueName: \"kubernetes.io/projected/8ae1c4d1-c972-4a7f-b82b-77081d857b54-kube-api-access-6sv45\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.553206 master-0 kubenswrapper[27835]: I0318 13:40:07.553085 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sp8s\" (UniqueName: \"kubernetes.io/projected/a19a49f4-4b05-4b24-8674-2bfeaeb5d36e-kube-api-access-5sp8s\") pod \"telemetry-operator-controller-manager-d6b694c5-t6wn4\" (UID: \"a19a49f4-4b05-4b24-8674-2bfeaeb5d36e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:07.553404 master-0 kubenswrapper[27835]: I0318 13:40:07.553379 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n57s7\" (UniqueName: \"kubernetes.io/projected/72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7-kube-api-access-n57s7\") pod \"swift-operator-controller-manager-c674c5965-xzxt9\" (UID: \"72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:07.553484 master-0 kubenswrapper[27835]: I0318 13:40:07.553440 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zwn\" (UniqueName: \"kubernetes.io/projected/ef8868b4-d8bd-4e66-a07e-b853494fadd4-kube-api-access-h4zwn\") pod \"octavia-operator-controller-manager-5b9f45d989-j47dt\" (UID: \"ef8868b4-d8bd-4e66-a07e-b853494fadd4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" Mar 18 13:40:07.553484 master-0 kubenswrapper[27835]: I0318 13:40:07.553476 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlgp\" (UniqueName: \"kubernetes.io/projected/0cafaee3-22fe-4afa-9ab3-d028404cdfdb-kube-api-access-pxlgp\") pod \"ovn-operator-controller-manager-884679f54-mmprj\" (UID: \"0cafaee3-22fe-4afa-9ab3-d028404cdfdb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:07.553548 master-0 kubenswrapper[27835]: I0318 13:40:07.553493 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.553719 master-0 kubenswrapper[27835]: E0318 13:40:07.553612 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 
13:40:07.553719 master-0 kubenswrapper[27835]: E0318 13:40:07.553653 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:08.053639076 +0000 UTC m=+972.018850636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:07.557020 master-0 kubenswrapper[27835]: I0318 13:40:07.556969 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk"] Mar 18 13:40:07.558289 master-0 kubenswrapper[27835]: I0318 13:40:07.558219 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" Mar 18 13:40:07.563739 master-0 kubenswrapper[27835]: I0318 13:40:07.563694 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7" Mar 18 13:40:07.592578 master-0 kubenswrapper[27835]: I0318 13:40:07.590339 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk"] Mar 18 13:40:07.617980 master-0 kubenswrapper[27835]: I0318 13:40:07.617929 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" Mar 18 13:40:07.636852 master-0 kubenswrapper[27835]: I0318 13:40:07.635865 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" Mar 18 13:40:07.645660 master-0 kubenswrapper[27835]: I0318 13:40:07.645459 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"] Mar 18 13:40:07.651610 master-0 kubenswrapper[27835]: I0318 13:40:07.650725 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.653649 master-0 kubenswrapper[27835]: I0318 13:40:07.653538 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656075 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54v2\" (UniqueName: \"kubernetes.io/projected/a20ee255-08be-423f-92c3-d4bdd0680f11-kube-api-access-w54v2\") pod \"watcher-operator-controller-manager-6c4d75f7f9-htxdk\" (UID: \"a20ee255-08be-423f-92c3-d4bdd0680f11\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656182 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6fd\" (UniqueName: \"kubernetes.io/projected/843225e7-36e3-4ef3-853f-bc0aa9452deb-kube-api-access-kh6fd\") pod \"test-operator-controller-manager-5c5cb9c4d7-b84m4\" (UID: \"843225e7-36e3-4ef3-853f-bc0aa9452deb\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656207 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sp8s\" (UniqueName: 
\"kubernetes.io/projected/a19a49f4-4b05-4b24-8674-2bfeaeb5d36e-kube-api-access-5sp8s\") pod \"telemetry-operator-controller-manager-d6b694c5-t6wn4\" (UID: \"a19a49f4-4b05-4b24-8674-2bfeaeb5d36e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656264 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57s7\" (UniqueName: \"kubernetes.io/projected/72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7-kube-api-access-n57s7\") pod \"swift-operator-controller-manager-c674c5965-xzxt9\" (UID: \"72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656824 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxlgp\" (UniqueName: \"kubernetes.io/projected/0cafaee3-22fe-4afa-9ab3-d028404cdfdb-kube-api-access-pxlgp\") pod \"ovn-operator-controller-manager-884679f54-mmprj\" (UID: \"0cafaee3-22fe-4afa-9ab3-d028404cdfdb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656888 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr7f7\" (UniqueName: \"kubernetes.io/projected/bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54-kube-api-access-jr7f7\") pod \"placement-operator-controller-manager-5784578c99-zb64s\" (UID: \"bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.656892 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zwn\" (UniqueName: \"kubernetes.io/projected/ef8868b4-d8bd-4e66-a07e-b853494fadd4-kube-api-access-h4zwn\") pod 
\"octavia-operator-controller-manager-5b9f45d989-j47dt\" (UID: \"ef8868b4-d8bd-4e66-a07e-b853494fadd4\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" Mar 18 13:40:07.662872 master-0 kubenswrapper[27835]: I0318 13:40:07.657164 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 18 13:40:07.691003 master-0 kubenswrapper[27835]: I0318 13:40:07.690425 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" Mar 18 13:40:07.692270 master-0 kubenswrapper[27835]: I0318 13:40:07.691790 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr7f7\" (UniqueName: \"kubernetes.io/projected/bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54-kube-api-access-jr7f7\") pod \"placement-operator-controller-manager-5784578c99-zb64s\" (UID: \"bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 13:40:07.697915 master-0 kubenswrapper[27835]: I0318 13:40:07.697617 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sv45\" (UniqueName: \"kubernetes.io/projected/8ae1c4d1-c972-4a7f-b82b-77081d857b54-kube-api-access-6sv45\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:07.699378 master-0 kubenswrapper[27835]: I0318 13:40:07.698854 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57s7\" (UniqueName: \"kubernetes.io/projected/72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7-kube-api-access-n57s7\") pod \"swift-operator-controller-manager-c674c5965-xzxt9\" (UID: \"72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7\") " 
pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:07.701899 master-0 kubenswrapper[27835]: I0318 13:40:07.701836 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sp8s\" (UniqueName: \"kubernetes.io/projected/a19a49f4-4b05-4b24-8674-2bfeaeb5d36e-kube-api-access-5sp8s\") pod \"telemetry-operator-controller-manager-d6b694c5-t6wn4\" (UID: \"a19a49f4-4b05-4b24-8674-2bfeaeb5d36e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:07.702017 master-0 kubenswrapper[27835]: I0318 13:40:07.701852 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxlgp\" (UniqueName: \"kubernetes.io/projected/0cafaee3-22fe-4afa-9ab3-d028404cdfdb-kube-api-access-pxlgp\") pod \"ovn-operator-controller-manager-884679f54-mmprj\" (UID: \"0cafaee3-22fe-4afa-9ab3-d028404cdfdb\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:07.706928 master-0 kubenswrapper[27835]: I0318 13:40:07.706654 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"] Mar 18 13:40:07.721030 master-0 kubenswrapper[27835]: I0318 13:40:07.720271 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:07.726854 master-0 kubenswrapper[27835]: I0318 13:40:07.726331 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc"] Mar 18 13:40:07.728887 master-0 kubenswrapper[27835]: I0318 13:40:07.728289 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" Mar 18 13:40:07.754325 master-0 kubenswrapper[27835]: I0318 13:40:07.753909 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 13:40:07.755755 master-0 kubenswrapper[27835]: I0318 13:40:07.754832 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc"] Mar 18 13:40:07.780920 master-0 kubenswrapper[27835]: I0318 13:40:07.777984 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkjp\" (UniqueName: \"kubernetes.io/projected/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-kube-api-access-prkjp\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.780920 master-0 kubenswrapper[27835]: I0318 13:40:07.778097 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.780920 master-0 kubenswrapper[27835]: I0318 13:40:07.778215 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w54v2\" (UniqueName: \"kubernetes.io/projected/a20ee255-08be-423f-92c3-d4bdd0680f11-kube-api-access-w54v2\") pod \"watcher-operator-controller-manager-6c4d75f7f9-htxdk\" (UID: \"a20ee255-08be-423f-92c3-d4bdd0680f11\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" 
Mar 18 13:40:07.780920 master-0 kubenswrapper[27835]: I0318 13:40:07.778386 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.780920 master-0 kubenswrapper[27835]: I0318 13:40:07.778440 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6fd\" (UniqueName: \"kubernetes.io/projected/843225e7-36e3-4ef3-853f-bc0aa9452deb-kube-api-access-kh6fd\") pod \"test-operator-controller-manager-5c5cb9c4d7-b84m4\" (UID: \"843225e7-36e3-4ef3-853f-bc0aa9452deb\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:07.792488 master-0 kubenswrapper[27835]: I0318 13:40:07.791013 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:07.815503 master-0 kubenswrapper[27835]: I0318 13:40:07.812389 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w54v2\" (UniqueName: \"kubernetes.io/projected/a20ee255-08be-423f-92c3-d4bdd0680f11-kube-api-access-w54v2\") pod \"watcher-operator-controller-manager-6c4d75f7f9-htxdk\" (UID: \"a20ee255-08be-423f-92c3-d4bdd0680f11\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" Mar 18 13:40:07.815503 master-0 kubenswrapper[27835]: I0318 13:40:07.813509 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6fd\" (UniqueName: \"kubernetes.io/projected/843225e7-36e3-4ef3-853f-bc0aa9452deb-kube-api-access-kh6fd\") pod \"test-operator-controller-manager-5c5cb9c4d7-b84m4\" (UID: \"843225e7-36e3-4ef3-853f-bc0aa9452deb\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:07.843190 master-0 kubenswrapper[27835]: I0318 13:40:07.842331 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: I0318 13:40:07.879856 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: I0318 13:40:07.879918 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jm8w\" (UniqueName: \"kubernetes.io/projected/b1a537c2-c838-4bb6-9bcc-29e68e909b77-kube-api-access-6jm8w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qqgnc\" (UID: \"b1a537c2-c838-4bb6-9bcc-29e68e909b77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: I0318 13:40:07.879959 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prkjp\" (UniqueName: \"kubernetes.io/projected/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-kube-api-access-prkjp\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: I0318 13:40:07.879993 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 
13:40:07.880580 master-0 kubenswrapper[27835]: E0318 13:40:07.880098 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: E0318 13:40:07.880134 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:08.380122523 +0000 UTC m=+972.345334083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: E0318 13:40:07.880172 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:07.880580 master-0 kubenswrapper[27835]: E0318 13:40:07.880188 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:08.380182495 +0000 UTC m=+972.345394055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:07.881911 master-0 kubenswrapper[27835]: I0318 13:40:07.881011 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:07.926627 master-0 kubenswrapper[27835]: I0318 13:40:07.907545 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" Mar 18 13:40:07.926627 master-0 kubenswrapper[27835]: I0318 13:40:07.912510 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"] Mar 18 13:40:07.926627 master-0 kubenswrapper[27835]: I0318 13:40:07.912882 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prkjp\" (UniqueName: \"kubernetes.io/projected/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-kube-api-access-prkjp\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:07.926627 master-0 kubenswrapper[27835]: I0318 13:40:07.922353 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"] Mar 18 13:40:07.982145 master-0 kubenswrapper[27835]: I0318 13:40:07.981682 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jm8w\" (UniqueName: \"kubernetes.io/projected/b1a537c2-c838-4bb6-9bcc-29e68e909b77-kube-api-access-6jm8w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-qqgnc\" (UID: \"b1a537c2-c838-4bb6-9bcc-29e68e909b77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" Mar 18 13:40:08.008053 master-0 kubenswrapper[27835]: I0318 13:40:08.007676 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jm8w\" (UniqueName: \"kubernetes.io/projected/b1a537c2-c838-4bb6-9bcc-29e68e909b77-kube-api-access-6jm8w\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-qqgnc\" (UID: \"b1a537c2-c838-4bb6-9bcc-29e68e909b77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" Mar 18 13:40:08.097445 master-0 kubenswrapper[27835]: I0318 13:40:08.095972 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:08.097445 master-0 kubenswrapper[27835]: E0318 13:40:08.096188 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:08.097445 master-0 kubenswrapper[27835]: E0318 13:40:08.096253 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:09.096233439 +0000 UTC m=+973.061444999 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:08.147454 master-0 kubenswrapper[27835]: I0318 13:40:08.146519 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz" event={"ID":"ddaec091-2dee-4ea7-a06e-c30e9c1ba96e","Type":"ContainerStarted","Data":"c25946016ce2df28aea82e02e6063ed84f8162b0068b54214c390546c3a772e7"} Mar 18 13:40:08.152502 master-0 kubenswrapper[27835]: I0318 13:40:08.149953 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh" event={"ID":"ee6bd2ee-8a51-4d24-9abf-5029e73a106a","Type":"ContainerStarted","Data":"cca183dd3d83d29eada0d440490b02e23bcefa8d6e778ea884d8085241cf0ed7"} Mar 18 13:40:08.169190 master-0 kubenswrapper[27835]: I0318 13:40:08.168985 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: E0318 13:40:08.405371 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: E0318 13:40:08.405448 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:09.405434118 +0000 UTC m=+973.370645678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: I0318 13:40:08.405563 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: I0318 13:40:08.405903 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: E0318 13:40:08.406002 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:08.406226 master-0 kubenswrapper[27835]: E0318 13:40:08.406148 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:09.406129548 +0000 UTC m=+973.371341158 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:08.410233 master-0 kubenswrapper[27835]: I0318 13:40:08.410187 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"] Mar 18 13:40:08.424920 master-0 kubenswrapper[27835]: W0318 13:40:08.424586 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d322608_4d0f_41cd_aff3_5de61bc2d86e.slice/crio-516d524ba12edf3a9ea0301f92183d3dda955a2d9556bdb12a683999c33bcb85 WatchSource:0}: Error finding container 516d524ba12edf3a9ea0301f92183d3dda955a2d9556bdb12a683999c33bcb85: Status 404 returned error can't find the container with id 516d524ba12edf3a9ea0301f92183d3dda955a2d9556bdb12a683999c33bcb85 Mar 18 13:40:08.435976 master-0 kubenswrapper[27835]: W0318 13:40:08.435927 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8979be93_aaa2_4ad9_b6d3_50af024d681a.slice/crio-ae9a1b777ad4fd513f68715b1b3805fa41eea21f6fd14e7a97603da5bd49224c WatchSource:0}: Error finding container ae9a1b777ad4fd513f68715b1b3805fa41eea21f6fd14e7a97603da5bd49224c: Status 404 returned error can't find the container with id ae9a1b777ad4fd513f68715b1b3805fa41eea21f6fd14e7a97603da5bd49224c Mar 18 13:40:08.443221 master-0 kubenswrapper[27835]: I0318 13:40:08.443171 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"] Mar 18 13:40:08.455449 master-0 kubenswrapper[27835]: I0318 13:40:08.455364 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"] Mar 18 13:40:08.466323 master-0 kubenswrapper[27835]: I0318 13:40:08.466259 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"] Mar 18 13:40:08.476392 master-0 kubenswrapper[27835]: I0318 13:40:08.476340 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"] Mar 18 13:40:08.508789 master-0 kubenswrapper[27835]: I0318 13:40:08.508684 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" Mar 18 13:40:08.508960 master-0 kubenswrapper[27835]: E0318 13:40:08.508897 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:08.509007 master-0 kubenswrapper[27835]: E0318 13:40:08.508990 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf nodeName:}" failed. No retries permitted until 2026-03-18 13:40:10.508966804 +0000 UTC m=+974.474178374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:09.121935 master-0 kubenswrapper[27835]: I0318 13:40:09.121840 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:09.122374 master-0 kubenswrapper[27835]: E0318 13:40:09.122344 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:09.122459 master-0 kubenswrapper[27835]: E0318 13:40:09.122434 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:11.122401269 +0000 UTC m=+975.087612829 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:09.160968 master-0 kubenswrapper[27835]: I0318 13:40:09.160891 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p" event={"ID":"9d322608-4d0f-41cd-aff3-5de61bc2d86e","Type":"ContainerStarted","Data":"516d524ba12edf3a9ea0301f92183d3dda955a2d9556bdb12a683999c33bcb85"} Mar 18 13:40:09.172455 master-0 kubenswrapper[27835]: I0318 13:40:09.172300 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp" event={"ID":"265baffa-3bec-4faa-be16-00c3d75f3b99","Type":"ContainerStarted","Data":"953eac005b3ad45485165caa1d0ccc9ef0822ce56fac428bc099cd6ee5506446"} Mar 18 13:40:09.174592 master-0 kubenswrapper[27835]: I0318 13:40:09.174547 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4" event={"ID":"b6047c49-c76b-4345-a72b-74be859fddc7","Type":"ContainerStarted","Data":"b5d7e00844098497c7b13d49d8e8e22bf2d1fc0758922e8d527f2eafde64d661"} Mar 18 13:40:09.176236 master-0 kubenswrapper[27835]: I0318 13:40:09.176182 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z" event={"ID":"8979be93-aaa2-4ad9-b6d3-50af024d681a","Type":"ContainerStarted","Data":"ae9a1b777ad4fd513f68715b1b3805fa41eea21f6fd14e7a97603da5bd49224c"} Mar 18 13:40:09.178007 master-0 kubenswrapper[27835]: I0318 13:40:09.177968 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk" 
event={"ID":"d109b5af-e96b-47ce-b4cc-c41c4e87ee49","Type":"ContainerStarted","Data":"a37110e4fff3ba21c36e3219c902c39e7f9e307511b0c60036e8db2c744761c7"} Mar 18 13:40:09.294590 master-0 kubenswrapper[27835]: I0318 13:40:09.293827 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt"] Mar 18 13:40:09.325739 master-0 kubenswrapper[27835]: W0318 13:40:09.319342 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef8868b4_d8bd_4e66_a07e_b853494fadd4.slice/crio-64f1ed88a5f53efb6383548403f7bdbfef8bd8c81bab53b4779e43d0722a6edb WatchSource:0}: Error finding container 64f1ed88a5f53efb6383548403f7bdbfef8bd8c81bab53b4779e43d0722a6edb: Status 404 returned error can't find the container with id 64f1ed88a5f53efb6383548403f7bdbfef8bd8c81bab53b4779e43d0722a6edb Mar 18 13:40:09.330538 master-0 kubenswrapper[27835]: I0318 13:40:09.330070 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9"] Mar 18 13:40:09.352148 master-0 kubenswrapper[27835]: W0318 13:40:09.344668 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cafaee3_22fe_4afa_9ab3_d028404cdfdb.slice/crio-1a75650812968fdc70ccf0e6138b344e1a21173fe87bc8bef920a9402f2a3f7c WatchSource:0}: Error finding container 1a75650812968fdc70ccf0e6138b344e1a21173fe87bc8bef920a9402f2a3f7c: Status 404 returned error can't find the container with id 1a75650812968fdc70ccf0e6138b344e1a21173fe87bc8bef920a9402f2a3f7c Mar 18 13:40:09.411555 master-0 kubenswrapper[27835]: I0318 13:40:09.409469 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"] Mar 18 13:40:09.431458 master-0 kubenswrapper[27835]: I0318 13:40:09.424547 27835 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"] Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: I0318 13:40:09.437862 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: I0318 13:40:09.438001 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: E0318 13:40:09.438167 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: E0318 13:40:09.438216 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:11.438200897 +0000 UTC m=+975.403412457 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: E0318 13:40:09.438255 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:09.445433 master-0 kubenswrapper[27835]: E0318 13:40:09.438272 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:11.438266869 +0000 UTC m=+975.403478429 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:09.502797 master-0 kubenswrapper[27835]: I0318 13:40:09.492485 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"] Mar 18 13:40:09.522792 master-0 kubenswrapper[27835]: I0318 13:40:09.520649 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-mmprj"] Mar 18 13:40:09.551846 master-0 kubenswrapper[27835]: W0318 13:40:09.542548 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3640aec0_34c2_454b_95cb_822cccc6425f.slice/crio-7bf4865752940d94ba9d4032137ecdafc352fde9d7e32d4ef95ecd1c39099ed8 WatchSource:0}: Error finding container 
7bf4865752940d94ba9d4032137ecdafc352fde9d7e32d4ef95ecd1c39099ed8: Status 404 returned error can't find the container with id 7bf4865752940d94ba9d4032137ecdafc352fde9d7e32d4ef95ecd1c39099ed8 Mar 18 13:40:09.575819 master-0 kubenswrapper[27835]: I0318 13:40:09.570096 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"] Mar 18 13:40:09.600431 master-0 kubenswrapper[27835]: I0318 13:40:09.597636 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-zb64s"] Mar 18 13:40:09.641474 master-0 kubenswrapper[27835]: I0318 13:40:09.632717 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"] Mar 18 13:40:09.670874 master-0 kubenswrapper[27835]: W0318 13:40:09.670815 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod843225e7_36e3_4ef3_853f_bc0aa9452deb.slice/crio-787c69fc95cbe8872e4c05d2e47356bace3e84251bd7d61037907ee7aac761a7 WatchSource:0}: Error finding container 787c69fc95cbe8872e4c05d2e47356bace3e84251bd7d61037907ee7aac761a7: Status 404 returned error can't find the container with id 787c69fc95cbe8872e4c05d2e47356bace3e84251bd7d61037907ee7aac761a7 Mar 18 13:40:09.725003 master-0 kubenswrapper[27835]: E0318 13:40:09.724860 27835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kh6fd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-b84m4_openstack-operators(843225e7-36e3-4ef3-853f-bc0aa9452deb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 18 13:40:09.726611 master-0 kubenswrapper[27835]: E0318 13:40:09.726543 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" podUID="843225e7-36e3-4ef3-853f-bc0aa9452deb" Mar 18 13:40:09.731228 master-0 kubenswrapper[27835]: E0318 13:40:09.731162 27835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jm8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000810000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-qqgnc_openstack-operators(b1a537c2-c838-4bb6-9bcc-29e68e909b77): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 18 13:40:09.731344 master-0 kubenswrapper[27835]: I0318 13:40:09.731239 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk"] Mar 18 13:40:09.733278 master-0 kubenswrapper[27835]: E0318 13:40:09.733202 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" podUID="b1a537c2-c838-4bb6-9bcc-29e68e909b77" Mar 18 13:40:09.751236 master-0 kubenswrapper[27835]: I0318 13:40:09.747621 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4"] Mar 18 13:40:09.759252 master-0 kubenswrapper[27835]: I0318 13:40:09.759174 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4"] Mar 18 13:40:09.770305 master-0 kubenswrapper[27835]: I0318 13:40:09.770259 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc"] Mar 18 13:40:10.191032 master-0 kubenswrapper[27835]: I0318 13:40:10.190980 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" event={"ID":"72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7","Type":"ContainerStarted","Data":"2c1ae92637925fdfca6b019017deab8ec2c22893f68b257eb3901b4d6d63b175"} Mar 18 13:40:10.192582 master-0 kubenswrapper[27835]: I0318 13:40:10.192550 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" event={"ID":"b1a537c2-c838-4bb6-9bcc-29e68e909b77","Type":"ContainerStarted","Data":"32b109675f1e976616ff34995baf1b84bc5b3584173482ca4e3b4619a2196b50"} Mar 18 13:40:10.194839 
master-0 kubenswrapper[27835]: I0318 13:40:10.194799 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" event={"ID":"a20ee255-08be-423f-92c3-d4bdd0680f11","Type":"ContainerStarted","Data":"2d5e5ce514b98d94f0bd19d579001df2875e4bb4f2fc7e4382b10e23d6713957"} Mar 18 13:40:10.195354 master-0 kubenswrapper[27835]: E0318 13:40:10.195326 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" podUID="b1a537c2-c838-4bb6-9bcc-29e68e909b77" Mar 18 13:40:10.196226 master-0 kubenswrapper[27835]: I0318 13:40:10.196198 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7" event={"ID":"f0c88737-f524-4267-9c13-719936da8c4e","Type":"ContainerStarted","Data":"82f91dfdd5a86652213c7ed103a00dba8708fde30ea0674443a62e43e6459d49"} Mar 18 13:40:10.198296 master-0 kubenswrapper[27835]: I0318 13:40:10.198271 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" event={"ID":"a19a49f4-4b05-4b24-8674-2bfeaeb5d36e","Type":"ContainerStarted","Data":"2b11f89c187a996724cb4e9db869f5a389603205fb94516d017ba0d9afca44cb"} Mar 18 13:40:10.204358 master-0 kubenswrapper[27835]: I0318 13:40:10.204297 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" event={"ID":"9d6a5f9d-4997-4253-9a8a-2783b30e219a","Type":"ContainerStarted","Data":"445b2570f55ff5003f3da16c62e4abff0411d6b165299396e4ec8a0221a4757b"} Mar 18 13:40:10.206126 master-0 kubenswrapper[27835]: I0318 13:40:10.206078 
27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" event={"ID":"0cafaee3-22fe-4afa-9ab3-d028404cdfdb","Type":"ContainerStarted","Data":"1a75650812968fdc70ccf0e6138b344e1a21173fe87bc8bef920a9402f2a3f7c"} Mar 18 13:40:10.210900 master-0 kubenswrapper[27835]: I0318 13:40:10.210853 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" event={"ID":"5b456512-ca01-4530-9368-6380bd8144e8","Type":"ContainerStarted","Data":"b2a90b870932d3ea9ebd5fd534e83dc3318187af36821ab1a0e637adafb08244"} Mar 18 13:40:10.213070 master-0 kubenswrapper[27835]: I0318 13:40:10.213041 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" event={"ID":"ef8868b4-d8bd-4e66-a07e-b853494fadd4","Type":"ContainerStarted","Data":"64f1ed88a5f53efb6383548403f7bdbfef8bd8c81bab53b4779e43d0722a6edb"} Mar 18 13:40:10.216312 master-0 kubenswrapper[27835]: I0318 13:40:10.216280 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" event={"ID":"843225e7-36e3-4ef3-853f-bc0aa9452deb","Type":"ContainerStarted","Data":"787c69fc95cbe8872e4c05d2e47356bace3e84251bd7d61037907ee7aac761a7"} Mar 18 13:40:10.217573 master-0 kubenswrapper[27835]: E0318 13:40:10.217547 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" podUID="843225e7-36e3-4ef3-853f-bc0aa9452deb" Mar 18 13:40:10.218675 master-0 kubenswrapper[27835]: I0318 13:40:10.218645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh" event={"ID":"f84705cb-9f70-43e3-ba36-6f9530ad53af","Type":"ContainerStarted","Data":"d62c1a76f2dd3a99306f8ca20f84ac5128a88b3021bc53f48f01aae37bd6b93d"} Mar 18 13:40:10.220270 master-0 kubenswrapper[27835]: I0318 13:40:10.220214 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" event={"ID":"bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54","Type":"ContainerStarted","Data":"c3ba089e9363627e2ce57915ff16537d6fbf07e79f74774f72795fe5c2010b8b"} Mar 18 13:40:10.223151 master-0 kubenswrapper[27835]: I0318 13:40:10.223119 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr" event={"ID":"3640aec0-34c2-454b-95cb-822cccc6425f","Type":"ContainerStarted","Data":"7bf4865752940d94ba9d4032137ecdafc352fde9d7e32d4ef95ecd1c39099ed8"} Mar 18 13:40:10.590285 master-0 kubenswrapper[27835]: I0318 13:40:10.590157 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" Mar 18 13:40:10.591221 master-0 kubenswrapper[27835]: E0318 13:40:10.590959 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:10.591221 master-0 kubenswrapper[27835]: E0318 13:40:10.591096 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf nodeName:}" failed. No retries permitted until 2026-03-18 13:40:14.59107325 +0000 UTC m=+978.556284810 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:11.219434 master-0 kubenswrapper[27835]: I0318 13:40:11.217709 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:11.219434 master-0 kubenswrapper[27835]: E0318 13:40:11.217946 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:11.219434 master-0 kubenswrapper[27835]: E0318 13:40:11.218003 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:15.217986018 +0000 UTC m=+979.183197578 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:11.265453 master-0 kubenswrapper[27835]: E0318 13:40:11.265375 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" podUID="843225e7-36e3-4ef3-853f-bc0aa9452deb" Mar 18 13:40:11.265765 master-0 kubenswrapper[27835]: E0318 13:40:11.265737 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" podUID="b1a537c2-c838-4bb6-9bcc-29e68e909b77" Mar 18 13:40:11.524669 master-0 kubenswrapper[27835]: I0318 13:40:11.524385 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:11.524669 master-0 kubenswrapper[27835]: E0318 13:40:11.524640 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:11.525185 master-0 
kubenswrapper[27835]: E0318 13:40:11.524707 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:15.524690811 +0000 UTC m=+979.489902361 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:11.525185 master-0 kubenswrapper[27835]: I0318 13:40:11.524983 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:11.525304 master-0 kubenswrapper[27835]: E0318 13:40:11.525214 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:11.525304 master-0 kubenswrapper[27835]: E0318 13:40:11.525245 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:15.525237035 +0000 UTC m=+979.490448595 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:14.685522 master-0 kubenswrapper[27835]: I0318 13:40:14.683932 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" Mar 18 13:40:14.685522 master-0 kubenswrapper[27835]: E0318 13:40:14.684196 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:14.685522 master-0 kubenswrapper[27835]: E0318 13:40:14.684242 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf nodeName:}" failed. No retries permitted until 2026-03-18 13:40:22.68422888 +0000 UTC m=+986.649440440 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:15.295225 master-0 kubenswrapper[27835]: I0318 13:40:15.295176 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:15.295460 master-0 kubenswrapper[27835]: E0318 13:40:15.295369 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:15.295508 master-0 kubenswrapper[27835]: E0318 13:40:15.295483 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:23.295450085 +0000 UTC m=+987.260661665 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:15.600235 master-0 kubenswrapper[27835]: I0318 13:40:15.600096 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:15.600457 master-0 kubenswrapper[27835]: I0318 13:40:15.600235 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:15.600457 master-0 kubenswrapper[27835]: E0318 13:40:15.600333 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:15.600457 master-0 kubenswrapper[27835]: E0318 13:40:15.600404 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:15.600579 master-0 kubenswrapper[27835]: E0318 13:40:15.600475 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. 
No retries permitted until 2026-03-18 13:40:23.600457472 +0000 UTC m=+987.565669032 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:15.600579 master-0 kubenswrapper[27835]: E0318 13:40:15.600516 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:23.600498123 +0000 UTC m=+987.565709683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:22.740440 master-0 kubenswrapper[27835]: I0318 13:40:22.740362 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" Mar 18 13:40:22.741180 master-0 kubenswrapper[27835]: E0318 13:40:22.740873 27835 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:22.741180 master-0 kubenswrapper[27835]: E0318 13:40:22.740997 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert podName:71018e48-64b3-42f0-b37f-dfa72163b1bf 
nodeName:}" failed. No retries permitted until 2026-03-18 13:40:38.740966191 +0000 UTC m=+1002.706177781 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert") pod "infra-operator-controller-manager-7dd6bb94c9-w5stx" (UID: "71018e48-64b3-42f0-b37f-dfa72163b1bf") : secret "infra-operator-webhook-server-cert" not found Mar 18 13:40:23.355719 master-0 kubenswrapper[27835]: I0318 13:40:23.355319 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" Mar 18 13:40:23.355719 master-0 kubenswrapper[27835]: E0318 13:40:23.355712 27835 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:23.355955 master-0 kubenswrapper[27835]: E0318 13:40:23.355776 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert podName:8ae1c4d1-c972-4a7f-b82b-77081d857b54 nodeName:}" failed. No retries permitted until 2026-03-18 13:40:39.355761283 +0000 UTC m=+1003.320972833 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert") pod "openstack-baremetal-operator-controller-manager-74c479689995w99" (UID: "8ae1c4d1-c972-4a7f-b82b-77081d857b54") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 13:40:23.660784 master-0 kubenswrapper[27835]: I0318 13:40:23.660710 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:23.661034 master-0 kubenswrapper[27835]: E0318 13:40:23.660902 27835 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 13:40:23.661034 master-0 kubenswrapper[27835]: E0318 13:40:23.661016 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:39.660993125 +0000 UTC m=+1003.626204705 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "webhook-server-cert" not found Mar 18 13:40:23.661198 master-0 kubenswrapper[27835]: I0318 13:40:23.661050 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" Mar 18 13:40:23.661287 master-0 kubenswrapper[27835]: E0318 13:40:23.661263 27835 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 13:40:23.661343 master-0 kubenswrapper[27835]: E0318 13:40:23.661336 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs podName:b34ee23d-ec7f-42ef-a3d5-addfc168f64d nodeName:}" failed. No retries permitted until 2026-03-18 13:40:39.661318094 +0000 UTC m=+1003.626529664 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-xstdn" (UID: "b34ee23d-ec7f-42ef-a3d5-addfc168f64d") : secret "metrics-server-cert" not found Mar 18 13:40:29.483632 master-0 kubenswrapper[27835]: I0318 13:40:29.479528 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk" event={"ID":"d109b5af-e96b-47ce-b4cc-c41c4e87ee49","Type":"ContainerStarted","Data":"9d4bd25b290b3a446a2fa43e7656279ebb040281d74dbea81e15d3fc5a4724a2"} Mar 18 13:40:29.483632 master-0 kubenswrapper[27835]: I0318 13:40:29.480614 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk" Mar 18 13:40:29.504527 master-0 kubenswrapper[27835]: I0318 13:40:29.503892 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp" event={"ID":"265baffa-3bec-4faa-be16-00c3d75f3b99","Type":"ContainerStarted","Data":"c86d83b1a95fd7cc92a9b7db37bf2667243fb099e6bd3ff6c8657be9be22d868"} Mar 18 13:40:29.504739 master-0 kubenswrapper[27835]: I0318 13:40:29.504594 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp" Mar 18 13:40:29.527462 master-0 kubenswrapper[27835]: I0318 13:40:29.525964 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" event={"ID":"a19a49f4-4b05-4b24-8674-2bfeaeb5d36e","Type":"ContainerStarted","Data":"45f764d2d854d7ae7481e01155cfd757b9d01baf29f3d9d4975a8aba41575515"} Mar 18 13:40:29.527462 master-0 kubenswrapper[27835]: I0318 13:40:29.526271 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" Mar 18 13:40:29.544469 master-0 kubenswrapper[27835]: I0318 13:40:29.539237 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz" event={"ID":"ddaec091-2dee-4ea7-a06e-c30e9c1ba96e","Type":"ContainerStarted","Data":"1d56aecaeacc001c4012a87b6945b5b9ae764738041ca2aa2ddbb9ff43061314"} Mar 18 13:40:29.544469 master-0 kubenswrapper[27835]: I0318 13:40:29.539704 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz" Mar 18 13:40:29.557440 master-0 kubenswrapper[27835]: I0318 13:40:29.554195 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z" event={"ID":"8979be93-aaa2-4ad9-b6d3-50af024d681a","Type":"ContainerStarted","Data":"b84c6e206723b6f872e58ca577f1d41fb34214372738f05700fd6312430caba6"} Mar 18 13:40:29.557440 master-0 kubenswrapper[27835]: I0318 13:40:29.554353 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z" Mar 18 13:40:29.570440 master-0 kubenswrapper[27835]: I0318 13:40:29.568587 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4" event={"ID":"b6047c49-c76b-4345-a72b-74be859fddc7","Type":"ContainerStarted","Data":"0948cb7a63c9e9703b7d6e13a5241f1d39af5d4f91366adf4a01714d5dd84f38"} Mar 18 13:40:29.570440 master-0 kubenswrapper[27835]: I0318 13:40:29.569494 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4" Mar 18 13:40:29.586436 master-0 kubenswrapper[27835]: I0318 13:40:29.586294 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p" event={"ID":"9d322608-4d0f-41cd-aff3-5de61bc2d86e","Type":"ContainerStarted","Data":"f6c1da98a2b109c5ebe147debc6f5c050e47fe9ac244c36cef9c87cb52523482"} Mar 18 13:40:29.591433 master-0 kubenswrapper[27835]: I0318 13:40:29.587080 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p" Mar 18 13:40:29.604640 master-0 kubenswrapper[27835]: I0318 13:40:29.603292 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr" event={"ID":"3640aec0-34c2-454b-95cb-822cccc6425f","Type":"ContainerStarted","Data":"d6542600f62fea132f4009f4c645aa91ed565d754ee68f97895c30349db8dd9f"} Mar 18 13:40:29.604640 master-0 kubenswrapper[27835]: I0318 13:40:29.604161 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr" Mar 18 13:40:29.631303 master-0 kubenswrapper[27835]: I0318 13:40:29.631238 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7" event={"ID":"f0c88737-f524-4267-9c13-719936da8c4e","Type":"ContainerStarted","Data":"1724e10cdc1344e6cd7ee043fdda874b4b715cdda44ad2e2193b4ff45a8d1ea7"} Mar 18 13:40:29.631303 master-0 kubenswrapper[27835]: I0318 13:40:29.631291 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7" Mar 18 13:40:29.660448 master-0 kubenswrapper[27835]: I0318 13:40:29.659682 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" event={"ID":"ef8868b4-d8bd-4e66-a07e-b853494fadd4","Type":"ContainerStarted","Data":"eb5dd86c0747fd98937f0033ba06bfb8ccdf12ac3928c52aaa87da86c199b7e1"} Mar 
18 13:40:29.660689 master-0 kubenswrapper[27835]: I0318 13:40:29.660478 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" Mar 18 13:40:29.679749 master-0 kubenswrapper[27835]: I0318 13:40:29.679632 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" event={"ID":"72310f46-e1d8-4fac-9ff5-6c76d7f8f1f7","Type":"ContainerStarted","Data":"582440cde9fc144bf56ae9b1a2622a1ecfcddb0db28799e097f744337a09cb27"} Mar 18 13:40:29.684289 master-0 kubenswrapper[27835]: I0318 13:40:29.680653 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" Mar 18 13:40:29.702438 master-0 kubenswrapper[27835]: I0318 13:40:29.698621 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh" event={"ID":"f84705cb-9f70-43e3-ba36-6f9530ad53af","Type":"ContainerStarted","Data":"2a34f9309c13a898b8b5e3618d359a00c0f099f61315dd876859821167c2971d"} Mar 18 13:40:29.714448 master-0 kubenswrapper[27835]: I0318 13:40:29.713577 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh" Mar 18 13:40:29.745072 master-0 kubenswrapper[27835]: I0318 13:40:29.743831 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" event={"ID":"bfe66ed0-ad9d-4c19-9b57-ee59b0a3ca54","Type":"ContainerStarted","Data":"0d991ff18dd5035c2af6a51f206a149124d3e43705ac3898ba17f28aadd02e26"} Mar 18 13:40:29.745072 master-0 kubenswrapper[27835]: I0318 13:40:29.745027 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" Mar 18 
13:40:29.778197 master-0 kubenswrapper[27835]: I0318 13:40:29.778085 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" event={"ID":"5b456512-ca01-4530-9368-6380bd8144e8","Type":"ContainerStarted","Data":"056352ac2d0a5deeebe02c021eea484cbf9d2ec306d5a46d07226d945e5027a8"} Mar 18 13:40:29.779251 master-0 kubenswrapper[27835]: I0318 13:40:29.779230 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" Mar 18 13:40:29.800309 master-0 kubenswrapper[27835]: I0318 13:40:29.798011 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk" podStartSLOduration=5.149228291 podStartE2EDuration="23.797988148s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:08.434032961 +0000 UTC m=+972.399244531" lastFinishedPulling="2026-03-18 13:40:27.082792828 +0000 UTC m=+991.048004388" observedRunningTime="2026-03-18 13:40:29.778565014 +0000 UTC m=+993.743776594" watchObservedRunningTime="2026-03-18 13:40:29.797988148 +0000 UTC m=+993.763199708" Mar 18 13:40:29.820852 master-0 kubenswrapper[27835]: I0318 13:40:29.820798 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" event={"ID":"b1a537c2-c838-4bb6-9bcc-29e68e909b77","Type":"ContainerStarted","Data":"343a72974012f0ef6e6359dc34c05d652993273c89fc9cd7f16e2ec1770ce32d"} Mar 18 13:40:29.855358 master-0 kubenswrapper[27835]: I0318 13:40:29.855296 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" event={"ID":"a20ee255-08be-423f-92c3-d4bdd0680f11","Type":"ContainerStarted","Data":"1011a940e276e0d9fff4f41a286138b7d6a4b2997c09107875861473869a7e35"} Mar 18 13:40:29.856147 
master-0 kubenswrapper[27835]: I0318 13:40:29.856122 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" Mar 18 13:40:29.885478 master-0 kubenswrapper[27835]: I0318 13:40:29.885388 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7" podStartSLOduration=6.18381267 podStartE2EDuration="23.885374298s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.376514182 +0000 UTC m=+973.341725742" lastFinishedPulling="2026-03-18 13:40:27.07807581 +0000 UTC m=+991.043287370" observedRunningTime="2026-03-18 13:40:29.884817333 +0000 UTC m=+993.850028883" watchObservedRunningTime="2026-03-18 13:40:29.885374298 +0000 UTC m=+993.850585858" Mar 18 13:40:29.885778 master-0 kubenswrapper[27835]: I0318 13:40:29.885719 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" event={"ID":"0cafaee3-22fe-4afa-9ab3-d028404cdfdb","Type":"ContainerStarted","Data":"a181f328f13edf032796a1d47f1f97ec3d24f391fe663bc607be68a40d53790a"} Mar 18 13:40:29.886679 master-0 kubenswrapper[27835]: I0318 13:40:29.886654 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" Mar 18 13:40:29.892252 master-0 kubenswrapper[27835]: I0318 13:40:29.887535 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4" podStartSLOduration=6.495408783 podStartE2EDuration="23.887525236s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.68754224 +0000 UTC m=+973.652753800" lastFinishedPulling="2026-03-18 13:40:27.079658683 +0000 UTC m=+991.044870253" observedRunningTime="2026-03-18 
13:40:29.83695134 +0000 UTC m=+993.802162910" watchObservedRunningTime="2026-03-18 13:40:29.887525236 +0000 UTC m=+993.852736796" Mar 18 13:40:29.911254 master-0 kubenswrapper[27835]: I0318 13:40:29.911186 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" event={"ID":"9d6a5f9d-4997-4253-9a8a-2783b30e219a","Type":"ContainerStarted","Data":"71153482f85107800e19ec0236573caa71b89e2577ed35e6758bff82d3a20fd0"} Mar 18 13:40:29.912146 master-0 kubenswrapper[27835]: I0318 13:40:29.912105 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" Mar 18 13:40:29.940433 master-0 kubenswrapper[27835]: I0318 13:40:29.939716 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" event={"ID":"843225e7-36e3-4ef3-853f-bc0aa9452deb","Type":"ContainerStarted","Data":"5135a5567612f75d10ec671e2bfd50a9abc8adcc46a460ffac7cc411fb819bbd"} Mar 18 13:40:29.942093 master-0 kubenswrapper[27835]: I0318 13:40:29.940691 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" Mar 18 13:40:29.960138 master-0 kubenswrapper[27835]: I0318 13:40:29.959684 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh" event={"ID":"ee6bd2ee-8a51-4d24-9abf-5029e73a106a","Type":"ContainerStarted","Data":"a210c10ea6df61312741ba3f8ecfef7ab0a471f4f04ffb04673f6e7809d56b11"} Mar 18 13:40:29.960493 master-0 kubenswrapper[27835]: I0318 13:40:29.960467 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh" Mar 18 13:40:29.968931 master-0 kubenswrapper[27835]: I0318 13:40:29.968850 27835 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt" podStartSLOduration=5.232652885 podStartE2EDuration="23.968827601s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.334570939 +0000 UTC m=+973.299782499" lastFinishedPulling="2026-03-18 13:40:28.070745655 +0000 UTC m=+992.035957215" observedRunningTime="2026-03-18 13:40:29.948128763 +0000 UTC m=+993.913340333" watchObservedRunningTime="2026-03-18 13:40:29.968827601 +0000 UTC m=+993.934039151"
Mar 18 13:40:30.094515 master-0 kubenswrapper[27835]: I0318 13:40:30.090435 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p" podStartSLOduration=5.440880887 podStartE2EDuration="24.090398664s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:08.429457747 +0000 UTC m=+972.394669307" lastFinishedPulling="2026-03-18 13:40:27.078975524 +0000 UTC m=+991.044187084" observedRunningTime="2026-03-18 13:40:30.086138839 +0000 UTC m=+994.051350399" watchObservedRunningTime="2026-03-18 13:40:30.090398664 +0000 UTC m=+994.055610224"
Mar 18 13:40:30.094515 master-0 kubenswrapper[27835]: I0318 13:40:30.091015 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z" podStartSLOduration=5.463720874 podStartE2EDuration="24.091008381s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:08.449850418 +0000 UTC m=+972.415061978" lastFinishedPulling="2026-03-18 13:40:27.077137925 +0000 UTC m=+991.042349485" observedRunningTime="2026-03-18 13:40:30.006545859 +0000 UTC m=+993.971757429" watchObservedRunningTime="2026-03-18 13:40:30.091008381 +0000 UTC m=+994.056219941"
Mar 18 13:40:30.118459 master-0 kubenswrapper[27835]: I0318 13:40:30.114875 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s" podStartSLOduration=5.586679255 podStartE2EDuration="24.114856425s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.542328729 +0000 UTC m=+973.507540289" lastFinishedPulling="2026-03-18 13:40:28.070505899 +0000 UTC m=+992.035717459" observedRunningTime="2026-03-18 13:40:30.114561007 +0000 UTC m=+994.079772567" watchObservedRunningTime="2026-03-18 13:40:30.114856425 +0000 UTC m=+994.080067975"
Mar 18 13:40:30.165317 master-0 kubenswrapper[27835]: I0318 13:40:30.165224 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4" podStartSLOduration=5.419821119 podStartE2EDuration="24.165204604s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.325256048 +0000 UTC m=+973.290467608" lastFinishedPulling="2026-03-18 13:40:28.070639533 +0000 UTC m=+992.035851093" observedRunningTime="2026-03-18 13:40:30.162783719 +0000 UTC m=+994.127995279" watchObservedRunningTime="2026-03-18 13:40:30.165204604 +0000 UTC m=+994.130416164"
Mar 18 13:40:30.292496 master-0 kubenswrapper[27835]: I0318 13:40:30.292424 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp" podStartSLOduration=5.645326757 podStartE2EDuration="24.292392748s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:08.432708545 +0000 UTC m=+972.397920105" lastFinishedPulling="2026-03-18 13:40:27.079774536 +0000 UTC m=+991.044986096" observedRunningTime="2026-03-18 13:40:30.291287869 +0000 UTC m=+994.256499429" watchObservedRunningTime="2026-03-18 13:40:30.292392748 +0000 UTC m=+994.257604308"
Mar 18 13:40:30.300039 master-0 kubenswrapper[27835]: I0318 13:40:30.299881 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4" podStartSLOduration=7.875744929 podStartE2EDuration="24.299865021s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:08.443546788 +0000 UTC m=+972.408758348" lastFinishedPulling="2026-03-18 13:40:24.86766688 +0000 UTC m=+988.832878440" observedRunningTime="2026-03-18 13:40:30.232360048 +0000 UTC m=+994.197571618" watchObservedRunningTime="2026-03-18 13:40:30.299865021 +0000 UTC m=+994.265076581"
Mar 18 13:40:30.359774 master-0 kubenswrapper[27835]: I0318 13:40:30.359617 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh" podStartSLOduration=6.608333132 podStartE2EDuration="24.359600763s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.331113716 +0000 UTC m=+973.296325276" lastFinishedPulling="2026-03-18 13:40:27.082381347 +0000 UTC m=+991.047592907" observedRunningTime="2026-03-18 13:40:30.347776784 +0000 UTC m=+994.312988364" watchObservedRunningTime="2026-03-18 13:40:30.359600763 +0000 UTC m=+994.324812323"
Mar 18 13:40:30.398259 master-0 kubenswrapper[27835]: I0318 13:40:30.398157 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr" podStartSLOduration=6.882226699 podStartE2EDuration="24.398132544s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.564097117 +0000 UTC m=+973.529308677" lastFinishedPulling="2026-03-18 13:40:27.080002962 +0000 UTC m=+991.045214522" observedRunningTime="2026-03-18 13:40:30.387130257 +0000 UTC m=+994.352341837" watchObservedRunningTime="2026-03-18 13:40:30.398132544 +0000 UTC m=+994.363344104"
Mar 18 13:40:30.434462 master-0
kubenswrapper[27835]: I0318 13:40:30.434371 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz" podStartSLOduration=7.44516493 podStartE2EDuration="24.434351882s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:07.878480988 +0000 UTC m=+971.843692548" lastFinishedPulling="2026-03-18 13:40:24.86766794 +0000 UTC m=+988.832879500" observedRunningTime="2026-03-18 13:40:30.430950631 +0000 UTC m=+994.396162191" watchObservedRunningTime="2026-03-18 13:40:30.434351882 +0000 UTC m=+994.399563452"
Mar 18 13:40:30.521393 master-0 kubenswrapper[27835]: I0318 13:40:30.521312 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9" podStartSLOduration=6.784054828 podStartE2EDuration="24.52129107s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.343097429 +0000 UTC m=+973.308308989" lastFinishedPulling="2026-03-18 13:40:27.080333671 +0000 UTC m=+991.045545231" observedRunningTime="2026-03-18 13:40:30.487045325 +0000 UTC m=+994.452256885" watchObservedRunningTime="2026-03-18 13:40:30.52129107 +0000 UTC m=+994.486502640"
Mar 18 13:40:30.525327 master-0 kubenswrapper[27835]: I0318 13:40:30.525258 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89" podStartSLOduration=5.780979192 podStartE2EDuration="24.525239157s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.325380141 +0000 UTC m=+973.290591701" lastFinishedPulling="2026-03-18 13:40:28.069640106 +0000 UTC m=+992.034851666" observedRunningTime="2026-03-18 13:40:30.518084553 +0000 UTC m=+994.483296113" watchObservedRunningTime="2026-03-18 13:40:30.525239157 +0000 UTC m=+994.490450717"
Mar 18 13:40:30.548790 master-0 kubenswrapper[27835]: I0318 13:40:30.548720 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh" podStartSLOduration=6.2477403559999996 podStartE2EDuration="24.54870495s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:07.869899557 +0000 UTC m=+971.835111117" lastFinishedPulling="2026-03-18 13:40:26.170864141 +0000 UTC m=+990.136075711" observedRunningTime="2026-03-18 13:40:30.543291414 +0000 UTC m=+994.508502984" watchObservedRunningTime="2026-03-18 13:40:30.54870495 +0000 UTC m=+994.513916510"
Mar 18 13:40:30.570218 master-0 kubenswrapper[27835]: I0318 13:40:30.570125 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-qqgnc" podStartSLOduration=4.639333298 podStartE2EDuration="23.570105679s" podCreationTimestamp="2026-03-18 13:40:07 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.731033555 +0000 UTC m=+973.696245105" lastFinishedPulling="2026-03-18 13:40:28.661805926 +0000 UTC m=+992.627017486" observedRunningTime="2026-03-18 13:40:30.565675599 +0000 UTC m=+994.530887159" watchObservedRunningTime="2026-03-18 13:40:30.570105679 +0000 UTC m=+994.535317229"
Mar 18 13:40:30.596347 master-0 kubenswrapper[27835]: I0318 13:40:30.596275 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj" podStartSLOduration=7.032044796 podStartE2EDuration="24.596259985s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.513986164 +0000 UTC m=+973.479197734" lastFinishedPulling="2026-03-18 13:40:27.078201363 +0000 UTC m=+991.043412923" observedRunningTime="2026-03-18 13:40:30.594208659 +0000 UTC m=+994.559420229" watchObservedRunningTime="2026-03-18 13:40:30.596259985 +0000 UTC m=+994.561471545"
Mar 18 13:40:30.624139 master-0 kubenswrapper[27835]: I0318 13:40:30.623981 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk" podStartSLOduration=7.210853954 podStartE2EDuration="24.623958703s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.670235913 +0000 UTC m=+973.635447473" lastFinishedPulling="2026-03-18 13:40:27.083340662 +0000 UTC m=+991.048552222" observedRunningTime="2026-03-18 13:40:30.621018363 +0000 UTC m=+994.586229933" watchObservedRunningTime="2026-03-18 13:40:30.623958703 +0000 UTC m=+994.589170273"
Mar 18 13:40:30.662505 master-0 kubenswrapper[27835]: I0318 13:40:30.662430 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4" podStartSLOduration=5.814813765 podStartE2EDuration="24.66239941s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:09.724104618 +0000 UTC m=+973.689316178" lastFinishedPulling="2026-03-18 13:40:28.571690263 +0000 UTC m=+992.536901823" observedRunningTime="2026-03-18 13:40:30.655649878 +0000 UTC m=+994.620861458" watchObservedRunningTime="2026-03-18 13:40:30.66239941 +0000 UTC m=+994.627610970"
Mar 18 13:40:36.766756 master-0 kubenswrapper[27835]: I0318 13:40:36.766691 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-9hllz"
Mar 18 13:40:36.797023 master-0 kubenswrapper[27835]: I0318 13:40:36.796972 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-7vhnh"
Mar 18 13:40:36.831759 master-0 kubenswrapper[27835]: I0318 13:40:36.831658 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openstack-operators/designate-operator-controller-manager-588d4d986b-zr74z"
Mar 18 13:40:37.018741 master-0 kubenswrapper[27835]: I0318 13:40:37.018626 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-hjwh4"
Mar 18 13:40:37.083365 master-0 kubenswrapper[27835]: I0318 13:40:37.083302 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-cjvmp"
Mar 18 13:40:37.213044 master-0 kubenswrapper[27835]: I0318 13:40:37.212980 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-xpppk"
Mar 18 13:40:37.288586 master-0 kubenswrapper[27835]: I0318 13:40:37.288372 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9xn5p"
Mar 18 13:40:37.365836 master-0 kubenswrapper[27835]: I0318 13:40:37.365771 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-pmtxh"
Mar 18 13:40:37.423745 master-0 kubenswrapper[27835]: I0318 13:40:37.423606 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-mqptr"
Mar 18 13:40:37.568554 master-0 kubenswrapper[27835]: I0318 13:40:37.567893 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-n87m7"
Mar 18 13:40:37.630673 master-0 kubenswrapper[27835]: I0318 13:40:37.629705 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-g2nh4"
Mar 18 13:40:37.639284 master-0 kubenswrapper[27835]: I0318 13:40:37.639218 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-fhj89"
Mar 18 13:40:37.695772 master-0 kubenswrapper[27835]: I0318 13:40:37.694887 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-j47dt"
Mar 18 13:40:37.724183 master-0 kubenswrapper[27835]: I0318 13:40:37.724108 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-mmprj"
Mar 18 13:40:37.758467 master-0 kubenswrapper[27835]: I0318 13:40:37.758370 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-zb64s"
Mar 18 13:40:37.796124 master-0 kubenswrapper[27835]: I0318 13:40:37.796050 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-xzxt9"
Mar 18 13:40:37.845895 master-0 kubenswrapper[27835]: I0318 13:40:37.845751 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-t6wn4"
Mar 18 13:40:37.885011 master-0 kubenswrapper[27835]: I0318 13:40:37.884936 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-b84m4"
Mar 18 13:40:37.913495 master-0 kubenswrapper[27835]: I0318 13:40:37.912722 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-htxdk"
Mar 18 13:40:38.804134 master-0 kubenswrapper[27835]: I0318 13:40:38.804021 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:38.808760 master-0 kubenswrapper[27835]: I0318 13:40:38.808703 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/71018e48-64b3-42f0-b37f-dfa72163b1bf-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-w5stx\" (UID: \"71018e48-64b3-42f0-b37f-dfa72163b1bf\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:39.034282 master-0 kubenswrapper[27835]: I0318 13:40:39.034141 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:39.442682 master-0 kubenswrapper[27835]: I0318 13:40:39.442599 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"
Mar 18 13:40:39.445857 master-0 kubenswrapper[27835]: I0318 13:40:39.445825 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8ae1c4d1-c972-4a7f-b82b-77081d857b54-cert\") pod \"openstack-baremetal-operator-controller-manager-74c479689995w99\" (UID: \"8ae1c4d1-c972-4a7f-b82b-77081d857b54\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"
Mar 18 13:40:39.502070 master-0 kubenswrapper[27835]: I0318 13:40:39.501955 27835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"
Mar 18 13:40:39.749132 master-0 kubenswrapper[27835]: I0318 13:40:39.748072 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:39.749132 master-0 kubenswrapper[27835]: I0318 13:40:39.748260 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:39.752602 master-0 kubenswrapper[27835]: I0318 13:40:39.752552 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:39.753269 master-0 kubenswrapper[27835]: I0318 13:40:39.753241 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b34ee23d-ec7f-42ef-a3d5-addfc168f64d-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-xstdn\" (UID: \"b34ee23d-ec7f-42ef-a3d5-addfc168f64d\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:39.948154 master-0 kubenswrapper[27835]: I0318 13:40:39.948071 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:40.354213 master-0 kubenswrapper[27835]: W0318 13:40:40.350821 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71018e48_64b3_42f0_b37f_dfa72163b1bf.slice/crio-4aa3ba5b49d2ac7c8900c5a0f788ba434c5b65788d899a07a04cfe9f1aefb1ab WatchSource:0}: Error finding container 4aa3ba5b49d2ac7c8900c5a0f788ba434c5b65788d899a07a04cfe9f1aefb1ab: Status 404 returned error can't find the container with id 4aa3ba5b49d2ac7c8900c5a0f788ba434c5b65788d899a07a04cfe9f1aefb1ab
Mar 18 13:40:40.371493 master-0 kubenswrapper[27835]: I0318 13:40:40.371404 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"]
Mar 18 13:40:41.079109 master-0 kubenswrapper[27835]: I0318 13:40:41.079039 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" event={"ID":"71018e48-64b3-42f0-b37f-dfa72163b1bf","Type":"ContainerStarted","Data":"4aa3ba5b49d2ac7c8900c5a0f788ba434c5b65788d899a07a04cfe9f1aefb1ab"}
Mar 18 13:40:41.270683 master-0 kubenswrapper[27835]: I0318 13:40:41.270334 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"]
Mar 18 13:40:41.280834 master-0 kubenswrapper[27835]: W0318 13:40:41.276636 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ae1c4d1_c972_4a7f_b82b_77081d857b54.slice/crio-5739e277f6ca669d434cd88a08cd75ef3baad628fa90656710bc3d575a70d738 WatchSource:0}: Error finding container 5739e277f6ca669d434cd88a08cd75ef3baad628fa90656710bc3d575a70d738: Status 404 returned error can't find the container with id 5739e277f6ca669d434cd88a08cd75ef3baad628fa90656710bc3d575a70d738
Mar 18 13:40:41.589481 master-0 kubenswrapper[27835]: I0318 13:40:41.586926 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"]
Mar 18 13:40:42.125508 master-0 kubenswrapper[27835]: I0318 13:40:42.123312 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" event={"ID":"b34ee23d-ec7f-42ef-a3d5-addfc168f64d","Type":"ContainerStarted","Data":"8e49f7e38b7d1802e778a77caef8fc79ce8456197d254bfa594b28b9ecdef99c"}
Mar 18 13:40:42.125508 master-0 kubenswrapper[27835]: I0318 13:40:42.123372 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" event={"ID":"b34ee23d-ec7f-42ef-a3d5-addfc168f64d","Type":"ContainerStarted","Data":"a8efcb398dfd17492398cae5b6db127f74eca2ee44efe736b7b6473a708c48a7"}
Mar 18 13:40:42.125508 master-0 kubenswrapper[27835]: I0318 13:40:42.124292 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:40:42.125508 master-0 kubenswrapper[27835]: I0318 13:40:42.125064 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" event={"ID":"8ae1c4d1-c972-4a7f-b82b-77081d857b54","Type":"ContainerStarted","Data":"5739e277f6ca669d434cd88a08cd75ef3baad628fa90656710bc3d575a70d738"}
Mar 18 13:40:42.162438 master-0 kubenswrapper[27835]: I0318 13:40:42.160377 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn" podStartSLOduration=35.16035255 podStartE2EDuration="35.16035255s" podCreationTimestamp="2026-03-18 13:40:07 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:40:42.150943756 +0000 UTC m=+1006.116155316" watchObservedRunningTime="2026-03-18 13:40:42.16035255 +0000 UTC m=+1006.125564110"
Mar 18 13:40:45.161249 master-0 kubenswrapper[27835]: I0318 13:40:45.161166 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" event={"ID":"8ae1c4d1-c972-4a7f-b82b-77081d857b54","Type":"ContainerStarted","Data":"f691c11199dfe62f9e8859db6621b9caa5eb66900db071011e9aad5a033b18c2"}
Mar 18 13:40:45.161879 master-0 kubenswrapper[27835]: I0318 13:40:45.161310 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"
Mar 18 13:40:45.163243 master-0 kubenswrapper[27835]: I0318 13:40:45.163209 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" event={"ID":"71018e48-64b3-42f0-b37f-dfa72163b1bf","Type":"ContainerStarted","Data":"210237d8c117b319896fd21a80672f36bfd6526462617014e0818301331a4839"}
Mar 18 13:40:45.163494 master-0 kubenswrapper[27835]: I0318 13:40:45.163469 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:45.212074 master-0 kubenswrapper[27835]: I0318 13:40:45.211987 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99" podStartSLOduration=35.997561833 podStartE2EDuration="39.211967065s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:41.301375874 +0000 UTC m=+1005.266587434" lastFinishedPulling="2026-03-18 13:40:44.515781106 +0000 UTC m=+1008.480992666" observedRunningTime="2026-03-18 13:40:45.209123279 +0000 UTC m=+1009.174334859" watchObservedRunningTime="2026-03-18 13:40:45.211967065 +0000 UTC m=+1009.177178625"
Mar 18 13:40:45.238451 master-0 kubenswrapper[27835]: I0318 13:40:45.238325 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx" podStartSLOduration=35.078522906 podStartE2EDuration="39.238301767s" podCreationTimestamp="2026-03-18 13:40:06 +0000 UTC" firstStartedPulling="2026-03-18 13:40:40.358998456 +0000 UTC m=+1004.324210026" lastFinishedPulling="2026-03-18 13:40:44.518777317 +0000 UTC m=+1008.483988887" observedRunningTime="2026-03-18 13:40:45.23474442 +0000 UTC m=+1009.199955990" watchObservedRunningTime="2026-03-18 13:40:45.238301767 +0000 UTC m=+1009.203513327"
Mar 18 13:40:49.041200 master-0 kubenswrapper[27835]: I0318 13:40:49.041136 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-w5stx"
Mar 18 13:40:49.513805 master-0 kubenswrapper[27835]: I0318 13:40:49.513742 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c479689995w99"
Mar 18 13:40:49.958254 master-0 kubenswrapper[27835]: I0318 13:40:49.958181 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-xstdn"
Mar 18 13:41:29.612002 master-0 kubenswrapper[27835]: I0318 13:41:29.611938 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:29.613865 master-0 kubenswrapper[27835]: I0318 13:41:29.613840 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.627786 master-0 kubenswrapper[27835]: I0318 13:41:29.625353 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:29.639677 master-0 kubenswrapper[27835]: I0318 13:41:29.637962 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Mar 18 13:41:29.639677 master-0 kubenswrapper[27835]: I0318 13:41:29.638205 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Mar 18 13:41:29.639677 master-0 kubenswrapper[27835]: I0318 13:41:29.638659 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.639677 master-0 kubenswrapper[27835]: I0318 13:41:29.638781 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmq2j\" (UniqueName: \"kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.661891 master-0 kubenswrapper[27835]: I0318 13:41:29.661834 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Mar 18 13:41:29.715062 master-0 kubenswrapper[27835]: I0318 13:41:29.712981 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"]
Mar 18 13:41:29.732838 master-0 kubenswrapper[27835]: I0318 13:41:29.732745 27835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.736872 master-0 kubenswrapper[27835]: I0318 13:41:29.735948 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Mar 18 13:41:29.739519 master-0 kubenswrapper[27835]: I0318 13:41:29.739399 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"]
Mar 18 13:41:29.742518 master-0 kubenswrapper[27835]: I0318 13:41:29.741718 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmq2j\" (UniqueName: \"kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.742518 master-0 kubenswrapper[27835]: I0318 13:41:29.741871 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.742518 master-0 kubenswrapper[27835]: I0318 13:41:29.741922 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6q5\" (UniqueName: \"kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.742518 master-0 kubenswrapper[27835]: I0318 13:41:29.742053 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.742518 master-0 kubenswrapper[27835]: I0318 13:41:29.742115 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.743137 master-0 kubenswrapper[27835]: I0318 13:41:29.743101 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.759569 master-0 kubenswrapper[27835]: I0318 13:41:29.759514 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmq2j\" (UniqueName: \"kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j\") pod \"dnsmasq-dns-685c76cf85-kkhsf\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") " pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:29.843637 master-0 kubenswrapper[27835]: I0318 13:41:29.843467 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.843637 master-0 kubenswrapper[27835]: I0318 13:41:29.843596 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.843914 master-0 kubenswrapper[27835]: I0318 13:41:29.843754 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6q5\" (UniqueName: \"kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.844862 master-0 kubenswrapper[27835]: I0318 13:41:29.844529 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.844862 master-0 kubenswrapper[27835]: I0318 13:41:29.844597 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.865373 master-0 kubenswrapper[27835]: I0318 13:41:29.861536 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6q5\" (UniqueName: \"kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5\") pod \"dnsmasq-dns-8476fd89bc-ncvcc\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") " pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:29.975482 master-0 kubenswrapper[27835]: I0318 13:41:29.975341 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:30.067466 master-0 kubenswrapper[27835]: I0318 13:41:30.066693 27835 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:30.412164 master-0 kubenswrapper[27835]: I0318 13:41:30.412103 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:30.423906 master-0 kubenswrapper[27835]: W0318 13:41:30.423838 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd002959_2759_4b29_97c8_05ce0441059d.slice/crio-fae4cb6d26dead2bd894b9d6e510738cfc964f2a39f33601f13de9940a044061 WatchSource:0}: Error finding container fae4cb6d26dead2bd894b9d6e510738cfc964f2a39f33601f13de9940a044061: Status 404 returned error can't find the container with id fae4cb6d26dead2bd894b9d6e510738cfc964f2a39f33601f13de9940a044061
Mar 18 13:41:30.527064 master-0 kubenswrapper[27835]: W0318 13:41:30.527002 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e9fdf93_f5f4_4a4a_b20c_c786e160437e.slice/crio-59a4029f047cc0c5e11264398f342d4d4b8a3dee860035f3d71fdcbbdc6a1028 WatchSource:0}: Error finding container 59a4029f047cc0c5e11264398f342d4d4b8a3dee860035f3d71fdcbbdc6a1028: Status 404 returned error can't find the container with id 59a4029f047cc0c5e11264398f342d4d4b8a3dee860035f3d71fdcbbdc6a1028
Mar 18 13:41:30.532594 master-0 kubenswrapper[27835]: I0318 13:41:30.532537 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"]
Mar 18 13:41:30.823657 master-0 kubenswrapper[27835]: I0318 13:41:30.823577 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf" event={"ID":"bd002959-2759-4b29-97c8-05ce0441059d","Type":"ContainerStarted","Data":"fae4cb6d26dead2bd894b9d6e510738cfc964f2a39f33601f13de9940a044061"}
Mar 18 13:41:30.825212 master-0 kubenswrapper[27835]: I0318 13:41:30.825170 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc" event={"ID":"8e9fdf93-f5f4-4a4a-b20c-c786e160437e","Type":"ContainerStarted","Data":"59a4029f047cc0c5e11264398f342d4d4b8a3dee860035f3d71fdcbbdc6a1028"}
Mar 18 13:41:32.486528 master-0 kubenswrapper[27835]: I0318 13:41:32.486402 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:32.539476 master-0 kubenswrapper[27835]: I0318 13:41:32.537809 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"]
Mar 18 13:41:32.540671 master-0 kubenswrapper[27835]: I0318 13:41:32.540613 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.587625 master-0 kubenswrapper[27835]: I0318 13:41:32.587556 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"]
Mar 18 13:41:32.715040 master-0 kubenswrapper[27835]: I0318 13:41:32.714012 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.715040 master-0 kubenswrapper[27835]: I0318 13:41:32.714137 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.715040 master-0 kubenswrapper[27835]: I0318 13:41:32.714182 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547dn\" (UniqueName: \"kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.822562 master-0 kubenswrapper[27835]: I0318 13:41:32.822342 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547dn\" (UniqueName: \"kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.822562 master-0 kubenswrapper[27835]: I0318 13:41:32.822503 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.822562 master-0 kubenswrapper[27835]: I0318 13:41:32.822549 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.824377 master-0 kubenswrapper[27835]: I0318 13:41:32.824329 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:41:32.832781 master-0 kubenswrapper[27835]: I0318 13:41:32.831669 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName:
\"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:41:32.882046 master-0 kubenswrapper[27835]: I0318 13:41:32.880194 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547dn\" (UniqueName: \"kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn\") pod \"dnsmasq-dns-76849d6659-hmn6c\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:41:32.928019 master-0 kubenswrapper[27835]: I0318 13:41:32.927955 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:41:33.019590 master-0 kubenswrapper[27835]: I0318 13:41:33.016231 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"] Mar 18 13:41:33.051576 master-0 kubenswrapper[27835]: I0318 13:41:33.049632 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:41:33.062382 master-0 kubenswrapper[27835]: I0318 13:41:33.062330 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.134205 master-0 kubenswrapper[27835]: I0318 13:41:33.133475 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:41:33.245792 master-0 kubenswrapper[27835]: I0318 13:41:33.236201 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.245792 master-0 kubenswrapper[27835]: I0318 13:41:33.236281 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.245792 master-0 kubenswrapper[27835]: I0318 13:41:33.236391 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28546\" (UniqueName: \"kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.339606 master-0 kubenswrapper[27835]: I0318 13:41:33.338243 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.339606 master-0 kubenswrapper[27835]: I0318 13:41:33.338436 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-28546\" (UniqueName: \"kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.339606 master-0 kubenswrapper[27835]: I0318 13:41:33.338538 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.339606 master-0 kubenswrapper[27835]: I0318 13:41:33.339460 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.340552 master-0 kubenswrapper[27835]: I0318 13:41:33.339732 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.361610 master-0 kubenswrapper[27835]: I0318 13:41:33.361544 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28546\" (UniqueName: \"kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546\") pod \"dnsmasq-dns-6ff8fd9d5c-v9p5z\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:33.434435 master-0 kubenswrapper[27835]: I0318 13:41:33.434330 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:41:34.104139 master-0 kubenswrapper[27835]: I0318 13:41:34.102328 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"] Mar 18 13:41:34.209885 master-0 kubenswrapper[27835]: I0318 13:41:34.207503 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:41:34.945781 master-0 kubenswrapper[27835]: I0318 13:41:34.945478 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" event={"ID":"6a036cb1-24e8-401c-af08-1291061013fa","Type":"ContainerStarted","Data":"ade841e38ea6aa818bd0a8868c2bbeecc8f8c8f6676792920edff5d93802f85f"} Mar 18 13:41:34.949604 master-0 kubenswrapper[27835]: I0318 13:41:34.948628 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" event={"ID":"521cb15d-54dc-46b7-bab1-a7389273be9f","Type":"ContainerStarted","Data":"ab5e12233e5308b13b8f8f08a63193c22e5c0b430c8ea8dea2d6c41246fbe987"} Mar 18 13:41:36.771619 master-0 kubenswrapper[27835]: I0318 13:41:36.771536 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 18 13:41:36.773180 master-0 kubenswrapper[27835]: I0318 13:41:36.773144 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 18 13:41:36.783495 master-0 kubenswrapper[27835]: I0318 13:41:36.783326 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 18 13:41:36.784833 master-0 kubenswrapper[27835]: I0318 13:41:36.783349 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 18 13:41:36.794690 master-0 kubenswrapper[27835]: I0318 13:41:36.792535 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 18 13:41:36.830574 master-0 kubenswrapper[27835]: I0318 13:41:36.830227 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 18 13:41:36.849589 master-0 kubenswrapper[27835]: I0318 13:41:36.842066 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 13:41:36.849589 master-0 kubenswrapper[27835]: I0318 13:41:36.848254 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850532 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850598 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vzb\" (UniqueName: \"kubernetes.io/projected/52c3f355-8836-4d58-84ee-d6c2afb6c776-kube-api-access-v9vzb\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850652 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850732 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-kolla-config\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850762 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-config-data\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.850796 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.852550 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.852683 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.852728 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.852891 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 18 13:41:36.855206 master-0 kubenswrapper[27835]: I0318 13:41:36.854710 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 18 13:41:36.855698 master-0 kubenswrapper[27835]: I0318 13:41:36.854983 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.958770 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.960283 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.960369 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vzb\" (UniqueName: \"kubernetes.io/projected/52c3f355-8836-4d58-84ee-d6c2afb6c776-kube-api-access-v9vzb\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.960440 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.960991 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961097 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961166 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eebb854-46df-4ff4-a29f-800811284621\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^9575f2ba-ce9f-48b6-8de8-c4ea8fa9ee04\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961385 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961465 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961564 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-kolla-config\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961919 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.961956 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.962036 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-config-data\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.962106 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62gwv\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-kube-api-access-62gwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.962153 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.962444 master-0 kubenswrapper[27835]: I0318 13:41:36.962221 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:36.977379 master-0 kubenswrapper[27835]: I0318 13:41:36.972542 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-config-data\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.977379 master-0 kubenswrapper[27835]: I0318 13:41:36.974651 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52c3f355-8836-4d58-84ee-d6c2afb6c776-kolla-config\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.990207 master-0 kubenswrapper[27835]: I0318 13:41:36.985246 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9vzb\" (UniqueName: \"kubernetes.io/projected/52c3f355-8836-4d58-84ee-d6c2afb6c776-kube-api-access-v9vzb\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.991641 master-0 kubenswrapper[27835]: I0318 13:41:36.990653 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:36.991962 master-0 kubenswrapper[27835]: I0318 13:41:36.991925 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/52c3f355-8836-4d58-84ee-d6c2afb6c776-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52c3f355-8836-4d58-84ee-d6c2afb6c776\") " pod="openstack/memcached-0" Mar 18 13:41:37.089160 master-0 kubenswrapper[27835]: I0318 13:41:37.086783 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62gwv\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-kube-api-access-62gwv\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.092077 master-0 kubenswrapper[27835]: I0318 13:41:37.090966 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.092077 master-0 kubenswrapper[27835]: I0318 13:41:37.091031 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.092077 master-0 kubenswrapper[27835]: I0318 13:41:37.091120 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.092077 master-0 kubenswrapper[27835]: I0318 13:41:37.091862 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.092286 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.092689 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eebb854-46df-4ff4-a29f-800811284621\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9575f2ba-ce9f-48b6-8de8-c4ea8fa9ee04\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.093397 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.095377 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.098922 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.102940 master-0 kubenswrapper[27835]: I0318 13:41:37.099869 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.104377 master-0 kubenswrapper[27835]: I0318 13:41:37.096816 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.105776 master-0 kubenswrapper[27835]: I0318 13:41:37.105746 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.106133 master-0 kubenswrapper[27835]: I0318 13:41:37.106022 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.106289 master-0 kubenswrapper[27835]: I0318 13:41:37.106257 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.106289 master-0 kubenswrapper[27835]: I0318 13:41:37.106259 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.119740 master-0 kubenswrapper[27835]: I0318 13:41:37.119687 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.120096 master-0 kubenswrapper[27835]: I0318 13:41:37.120061 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 13:41:37.120200 master-0 kubenswrapper[27835]: I0318 13:41:37.120165 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eebb854-46df-4ff4-a29f-800811284621\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9575f2ba-ce9f-48b6-8de8-c4ea8fa9ee04\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/175ee37daf985028a02e23f01bf16ad669957af3eb1b34ba46598ec7bb629a73/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.168616 master-0 kubenswrapper[27835]: I0318 13:41:37.167998 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 18 13:41:37.170484 master-0 kubenswrapper[27835]: I0318 13:41:37.169473 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.170484 master-0 kubenswrapper[27835]: I0318 13:41:37.169894 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.172633 master-0 kubenswrapper[27835]: I0318 13:41:37.171120 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:37.224202 master-0 kubenswrapper[27835]: I0318 13:41:37.223971 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62gwv\" (UniqueName: \"kubernetes.io/projected/6f51d7b8-7e16-4c10-8e64-a5af8a8522ed-kube-api-access-62gwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:38.117387 master-0 kubenswrapper[27835]: I0318 13:41:38.117253 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 13:41:38.119226 master-0 kubenswrapper[27835]: I0318 13:41:38.119159 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.128272 master-0 kubenswrapper[27835]: I0318 13:41:38.128209 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 18 13:41:38.128532 master-0 kubenswrapper[27835]: I0318 13:41:38.128334 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 18 13:41:38.128765 master-0 kubenswrapper[27835]: I0318 13:41:38.128717 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 18 13:41:38.130575 master-0 kubenswrapper[27835]: I0318 13:41:38.130518 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 18 13:41:38.130786 master-0 kubenswrapper[27835]: I0318 13:41:38.130730 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 18 13:41:38.132137 master-0 kubenswrapper[27835]: I0318 13:41:38.131390 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 18 13:41:38.177945 master-0 kubenswrapper[27835]: I0318 13:41:38.177393 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 13:41:38.272940 master-0 kubenswrapper[27835]: I0318 13:41:38.272887 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-config-data\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.273264 master-0 kubenswrapper[27835]: I0318 13:41:38.272983 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b76c81c-7824-4bfa-af04-9c1fd928fb63-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.273264 master-0 kubenswrapper[27835]: I0318 13:41:38.273020 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b76c81c-7824-4bfa-af04-9c1fd928fb63-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.273264 master-0 kubenswrapper[27835]: I0318 13:41:38.273061 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.274716 master-0 kubenswrapper[27835]: I0318 13:41:38.274684 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.274788 master-0 kubenswrapper[27835]: I0318 13:41:38.274761 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.274900 master-0 kubenswrapper[27835]: I0318 13:41:38.274849 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ae8072b-f58a-42e1-9404-6be8076c3add\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^9041c259-83e8-4ef7-bc46-498a1e7696d1\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.275004 master-0 kubenswrapper[27835]: I0318 13:41:38.274979 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.275050 master-0 kubenswrapper[27835]: I0318 13:41:38.275033 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.275085 master-0 kubenswrapper[27835]: I0318 13:41:38.275064 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xjzq\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-kube-api-access-6xjzq\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.275117 master-0 kubenswrapper[27835]: I0318 13:41:38.275094 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.377524 master-0 kubenswrapper[27835]: I0318 13:41:38.377337 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1b76c81c-7824-4bfa-af04-9c1fd928fb63-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.377524 master-0 kubenswrapper[27835]: I0318 13:41:38.377422 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b76c81c-7824-4bfa-af04-9c1fd928fb63-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.377907 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.377968 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378061 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378128 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8ae8072b-f58a-42e1-9404-6be8076c3add\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^9041c259-83e8-4ef7-bc46-498a1e7696d1\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378225 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378296 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378317 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xjzq\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-kube-api-access-6xjzq\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.378432 master-0 kubenswrapper[27835]: I0318 13:41:38.378342 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.379349 master-0 kubenswrapper[27835]: I0318 13:41:38.378854 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.379349 master-0 kubenswrapper[27835]: I0318 13:41:38.379122 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.380433 master-0 kubenswrapper[27835]: I0318 13:41:38.379519 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.381206 master-0 kubenswrapper[27835]: I0318 13:41:38.380602 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.381206 master-0 kubenswrapper[27835]: I0318 13:41:38.381182 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-config-data\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.383527 master-0 kubenswrapper[27835]: I0318 13:41:38.383490 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.384081 
master-0 kubenswrapper[27835]: I0318 13:41:38.384026 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1b76c81c-7824-4bfa-af04-9c1fd928fb63-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.389140 master-0 kubenswrapper[27835]: I0318 13:41:38.389063 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.389140 master-0 kubenswrapper[27835]: I0318 13:41:38.389095 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1b76c81c-7824-4bfa-af04-9c1fd928fb63-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.392473 master-0 kubenswrapper[27835]: I0318 13:41:38.391542 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 13:41:38.392473 master-0 kubenswrapper[27835]: I0318 13:41:38.391570 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8ae8072b-f58a-42e1-9404-6be8076c3add\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9041c259-83e8-4ef7-bc46-498a1e7696d1\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8e667f2a34e70cd5acea75ff149dcfc63e11281c5f8de9b3dd21668ea69d57d5/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.393652 master-0 kubenswrapper[27835]: I0318 13:41:38.393316 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1b76c81c-7824-4bfa-af04-9c1fd928fb63-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.411383 master-0 kubenswrapper[27835]: I0318 13:41:38.410541 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xjzq\" (UniqueName: \"kubernetes.io/projected/1b76c81c-7824-4bfa-af04-9c1fd928fb63-kube-api-access-6xjzq\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:38.667998 master-0 kubenswrapper[27835]: I0318 13:41:38.667751 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 18 13:41:38.671479 master-0 kubenswrapper[27835]: I0318 13:41:38.670183 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 18 13:41:38.685602 master-0 kubenswrapper[27835]: I0318 13:41:38.678722 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 18 13:41:38.685602 master-0 kubenswrapper[27835]: I0318 13:41:38.678998 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 18 13:41:38.685602 master-0 kubenswrapper[27835]: I0318 13:41:38.679150 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 18 13:41:38.757860 master-0 kubenswrapper[27835]: I0318 13:41:38.757800 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802382 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802487 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802514 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 
13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802596 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-default\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802828 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsvj\" (UniqueName: \"kubernetes.io/projected/483c8547-dea7-4fd8-b4db-4849a346d73a-kube-api-access-gjsvj\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.802861 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-kolla-config\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.803003 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ebf6ac13-979d-4c66-b169-0d8af519ecf9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a0e08c44-19e3-40ef-b0ba-2c5d0b437471\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.803303 master-0 kubenswrapper[27835]: I0318 13:41:38.803036 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.906253 master-0 kubenswrapper[27835]: I0318 13:41:38.906183 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.907424 master-0 kubenswrapper[27835]: I0318 13:41:38.907375 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.907552 master-0 kubenswrapper[27835]: I0318 13:41:38.907535 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-default\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.917509 master-0 kubenswrapper[27835]: I0318 13:41:38.917446 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjsvj\" (UniqueName: \"kubernetes.io/projected/483c8547-dea7-4fd8-b4db-4849a346d73a-kube-api-access-gjsvj\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.918541 master-0 kubenswrapper[27835]: I0318 13:41:38.918248 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-default\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " 
pod="openstack/openstack-galera-0" Mar 18 13:41:38.918794 master-0 kubenswrapper[27835]: I0318 13:41:38.918751 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.918892 master-0 kubenswrapper[27835]: I0318 13:41:38.918861 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-kolla-config\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.922482 master-0 kubenswrapper[27835]: I0318 13:41:38.920439 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/483c8547-dea7-4fd8-b4db-4849a346d73a-kolla-config\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.925960 master-0 kubenswrapper[27835]: I0318 13:41:38.925608 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ebf6ac13-979d-4c66-b169-0d8af519ecf9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a0e08c44-19e3-40ef-b0ba-2c5d0b437471\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.926842 master-0 kubenswrapper[27835]: I0318 13:41:38.926815 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.926943 master-0 
kubenswrapper[27835]: I0318 13:41:38.926891 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.928107 master-0 kubenswrapper[27835]: I0318 13:41:38.928066 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/483c8547-dea7-4fd8-b4db-4849a346d73a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.934500 master-0 kubenswrapper[27835]: I0318 13:41:38.934323 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eebb854-46df-4ff4-a29f-800811284621\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9575f2ba-ce9f-48b6-8de8-c4ea8fa9ee04\") pod \"rabbitmq-cell1-server-0\" (UID: \"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:38.945622 master-0 kubenswrapper[27835]: I0318 13:41:38.942518 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.945622 master-0 kubenswrapper[27835]: I0318 13:41:38.945125 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/483c8547-dea7-4fd8-b4db-4849a346d73a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:38.946201 master-0 kubenswrapper[27835]: I0318 13:41:38.945937 27835 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 13:41:38.946201 master-0 kubenswrapper[27835]: I0318 13:41:38.945964 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ebf6ac13-979d-4c66-b169-0d8af519ecf9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a0e08c44-19e3-40ef-b0ba-2c5d0b437471\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/368a2fb6ae8165f7ee2e3f828625ab63a1f96dd6834d291483a0b311ce384467/globalmount\"" pod="openstack/openstack-galera-0" Mar 18 13:41:38.967258 master-0 kubenswrapper[27835]: I0318 13:41:38.967216 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjsvj\" (UniqueName: \"kubernetes.io/projected/483c8547-dea7-4fd8-b4db-4849a346d73a-kube-api-access-gjsvj\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:39.002279 master-0 kubenswrapper[27835]: I0318 13:41:39.002221 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:41:40.250360 master-0 kubenswrapper[27835]: I0318 13:41:40.250305 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8ae8072b-f58a-42e1-9404-6be8076c3add\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9041c259-83e8-4ef7-bc46-498a1e7696d1\") pod \"rabbitmq-server-0\" (UID: \"1b76c81c-7824-4bfa-af04-9c1fd928fb63\") " pod="openstack/rabbitmq-server-0" Mar 18 13:41:40.262977 master-0 kubenswrapper[27835]: I0318 13:41:40.262848 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 13:41:40.574156 master-0 kubenswrapper[27835]: I0318 13:41:40.574087 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 13:41:40.589639 master-0 kubenswrapper[27835]: I0318 13:41:40.589581 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.590454 master-0 kubenswrapper[27835]: I0318 13:41:40.590191 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 13:41:40.593887 master-0 kubenswrapper[27835]: I0318 13:41:40.593693 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 18 13:41:40.593887 master-0 kubenswrapper[27835]: I0318 13:41:40.593788 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 18 13:41:40.595895 master-0 kubenswrapper[27835]: I0318 13:41:40.595782 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 18 13:41:40.691963 master-0 kubenswrapper[27835]: I0318 13:41:40.691894 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692177 master-0 kubenswrapper[27835]: I0318 13:41:40.691982 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " 
pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692177 master-0 kubenswrapper[27835]: I0318 13:41:40.692018 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9sc\" (UniqueName: \"kubernetes.io/projected/b2c70993-8f51-411e-ae8d-65ea5161c75e-kube-api-access-6v9sc\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692177 master-0 kubenswrapper[27835]: I0318 13:41:40.692071 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb68287f-9fbb-4c2e-98c4-04e29a7a4591\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3badc28c-7437-467f-a60a-dbbecc7e44a5\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692337 master-0 kubenswrapper[27835]: I0318 13:41:40.692209 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692337 master-0 kubenswrapper[27835]: I0318 13:41:40.692257 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692337 master-0 kubenswrapper[27835]: I0318 13:41:40.692300 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.692551 master-0 kubenswrapper[27835]: I0318 13:41:40.692351 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794379 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794453 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794476 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9sc\" (UniqueName: \"kubernetes.io/projected/b2c70993-8f51-411e-ae8d-65ea5161c75e-kube-api-access-6v9sc\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794503 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-eb68287f-9fbb-4c2e-98c4-04e29a7a4591\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3badc28c-7437-467f-a60a-dbbecc7e44a5\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794558 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794581 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794606 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.794638 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.796076 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.796128 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.796701 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.796726 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb68287f-9fbb-4c2e-98c4-04e29a7a4591\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3badc28c-7437-467f-a60a-dbbecc7e44a5\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1a9dd3b3679b373935fe992f28b1435f41f1af54bf98220522372d99896aca41/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.797099 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.801543 master-0 kubenswrapper[27835]: I0318 13:41:40.797210 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/b2c70993-8f51-411e-ae8d-65ea5161c75e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.802507 master-0 kubenswrapper[27835]: I0318 13:41:40.802340 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.803022 master-0 kubenswrapper[27835]: I0318 13:41:40.802970 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c70993-8f51-411e-ae8d-65ea5161c75e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:40.813172 master-0 kubenswrapper[27835]: I0318 13:41:40.813125 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9sc\" (UniqueName: \"kubernetes.io/projected/b2c70993-8f51-411e-ae8d-65ea5161c75e-kube-api-access-6v9sc\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:41.264497 master-0 kubenswrapper[27835]: I0318 13:41:41.264440 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ebf6ac13-979d-4c66-b169-0d8af519ecf9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a0e08c44-19e3-40ef-b0ba-2c5d0b437471\") pod \"openstack-galera-0\" (UID: \"483c8547-dea7-4fd8-b4db-4849a346d73a\") " pod="openstack/openstack-galera-0" Mar 18 13:41:41.416614 master-0 kubenswrapper[27835]: I0318 13:41:41.416540 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 18 13:41:42.332132 master-0 kubenswrapper[27835]: I0318 13:41:42.331254 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb68287f-9fbb-4c2e-98c4-04e29a7a4591\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3badc28c-7437-467f-a60a-dbbecc7e44a5\") pod \"openstack-cell1-galera-0\" (UID: \"b2c70993-8f51-411e-ae8d-65ea5161c75e\") " pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:42.343245 master-0 kubenswrapper[27835]: I0318 13:41:42.343181 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sf7pp"] Mar 18 13:41:42.349071 master-0 kubenswrapper[27835]: I0318 13:41:42.349010 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.356469 master-0 kubenswrapper[27835]: I0318 13:41:42.356401 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 18 13:41:42.356675 master-0 kubenswrapper[27835]: I0318 13:41:42.356660 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 18 13:41:42.411731 master-0 kubenswrapper[27835]: I0318 13:41:42.411176 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp"] Mar 18 13:41:42.442898 master-0 kubenswrapper[27835]: I0318 13:41:42.442843 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 18 13:41:42.460958 master-0 kubenswrapper[27835]: I0318 13:41:42.460899 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-combined-ca-bundle\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.460988 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-ovn-controller-tls-certs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.461012 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-log-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.461039 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrrs\" (UniqueName: \"kubernetes.io/projected/c90d0a00-53e9-4145-8137-d73cee5337f0-kube-api-access-wvrrs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.461066 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c90d0a00-53e9-4145-8137-d73cee5337f0-scripts\") pod 
\"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.461124 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.461173 master-0 kubenswrapper[27835]: I0318 13:41:42.461146 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.475324 master-0 kubenswrapper[27835]: I0318 13:41:42.474218 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gtvxg"] Mar 18 13:41:42.476984 master-0 kubenswrapper[27835]: I0318 13:41:42.476945 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.488302 master-0 kubenswrapper[27835]: I0318 13:41:42.488251 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gtvxg"] Mar 18 13:41:42.562260 master-0 kubenswrapper[27835]: I0318 13:41:42.562180 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-etc-ovs\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.562596 master-0 kubenswrapper[27835]: I0318 13:41:42.562285 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-ovn-controller-tls-certs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.562596 master-0 kubenswrapper[27835]: I0318 13:41:42.562307 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-log-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.562596 master-0 kubenswrapper[27835]: I0318 13:41:42.562339 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvrrs\" (UniqueName: \"kubernetes.io/projected/c90d0a00-53e9-4145-8137-d73cee5337f0-kube-api-access-wvrrs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.562965 master-0 kubenswrapper[27835]: I0318 13:41:42.562914 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/c90d0a00-53e9-4145-8137-d73cee5337f0-scripts\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563053 master-0 kubenswrapper[27835]: I0318 13:41:42.562988 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-log-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563663 master-0 kubenswrapper[27835]: I0318 13:41:42.563523 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-run\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.563663 master-0 kubenswrapper[27835]: I0318 13:41:42.563578 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-lib\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.563663 master-0 kubenswrapper[27835]: I0318 13:41:42.563623 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563837 master-0 kubenswrapper[27835]: I0318 13:41:42.563707 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563837 master-0 kubenswrapper[27835]: I0318 13:41:42.563774 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-log\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.563920 master-0 kubenswrapper[27835]: I0318 13:41:42.563839 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563920 master-0 kubenswrapper[27835]: I0318 13:41:42.563890 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-combined-ca-bundle\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563998 master-0 kubenswrapper[27835]: I0318 13:41:42.563926 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d0a00-53e9-4145-8137-d73cee5337f0-var-run-ovn\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.563998 master-0 kubenswrapper[27835]: I0318 13:41:42.563936 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4sh\" (UniqueName: 
\"kubernetes.io/projected/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-kube-api-access-fh4sh\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.563998 master-0 kubenswrapper[27835]: I0318 13:41:42.563968 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-scripts\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.565484 master-0 kubenswrapper[27835]: I0318 13:41:42.565365 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c90d0a00-53e9-4145-8137-d73cee5337f0-scripts\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.582114 master-0 kubenswrapper[27835]: I0318 13:41:42.582043 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-ovn-controller-tls-certs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.582541 master-0 kubenswrapper[27835]: I0318 13:41:42.582496 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d0a00-53e9-4145-8137-d73cee5337f0-combined-ca-bundle\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.590583 master-0 kubenswrapper[27835]: I0318 13:41:42.590540 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvrrs\" (UniqueName: 
\"kubernetes.io/projected/c90d0a00-53e9-4145-8137-d73cee5337f0-kube-api-access-wvrrs\") pod \"ovn-controller-sf7pp\" (UID: \"c90d0a00-53e9-4145-8137-d73cee5337f0\") " pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.665501 master-0 kubenswrapper[27835]: I0318 13:41:42.665447 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-scripts\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665501 master-0 kubenswrapper[27835]: I0318 13:41:42.665511 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-etc-ovs\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665733 master-0 kubenswrapper[27835]: I0318 13:41:42.665581 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-run\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665733 master-0 kubenswrapper[27835]: I0318 13:41:42.665612 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-lib\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665733 master-0 kubenswrapper[27835]: I0318 13:41:42.665673 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-log\") pod 
\"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665885 master-0 kubenswrapper[27835]: I0318 13:41:42.665836 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4sh\" (UniqueName: \"kubernetes.io/projected/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-kube-api-access-fh4sh\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.665988 master-0 kubenswrapper[27835]: I0318 13:41:42.665963 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-run\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.666036 master-0 kubenswrapper[27835]: I0318 13:41:42.665970 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-lib\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.666123 master-0 kubenswrapper[27835]: I0318 13:41:42.666100 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-var-log\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.666355 master-0 kubenswrapper[27835]: I0318 13:41:42.666336 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-etc-ovs\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " 
pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.668372 master-0 kubenswrapper[27835]: I0318 13:41:42.668332 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-scripts\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.732206 master-0 kubenswrapper[27835]: I0318 13:41:42.732148 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4sh\" (UniqueName: \"kubernetes.io/projected/c0b4c95e-e177-4d01-bd2e-ff94c66d594d-kube-api-access-fh4sh\") pod \"ovn-controller-ovs-gtvxg\" (UID: \"c0b4c95e-e177-4d01-bd2e-ff94c66d594d\") " pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:42.738620 master-0 kubenswrapper[27835]: I0318 13:41:42.732844 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp" Mar 18 13:41:42.815204 master-0 kubenswrapper[27835]: I0318 13:41:42.815079 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:41:43.331173 master-0 kubenswrapper[27835]: I0318 13:41:43.331076 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 13:41:43.333217 master-0 kubenswrapper[27835]: I0318 13:41:43.333165 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.337002 master-0 kubenswrapper[27835]: I0318 13:41:43.335887 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 18 13:41:43.337002 master-0 kubenswrapper[27835]: I0318 13:41:43.336176 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 18 13:41:43.337002 master-0 kubenswrapper[27835]: I0318 13:41:43.336588 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 18 13:41:43.337002 master-0 kubenswrapper[27835]: I0318 13:41:43.336748 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 18 13:41:43.346102 master-0 kubenswrapper[27835]: I0318 13:41:43.346040 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 13:41:43.525932 master-0 kubenswrapper[27835]: I0318 13:41:43.525868 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.526214 master-0 kubenswrapper[27835]: I0318 13:41:43.526133 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.526306 master-0 kubenswrapper[27835]: I0318 13:41:43.526275 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.526358 master-0 kubenswrapper[27835]: I0318 13:41:43.526310 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5blt7\" (UniqueName: \"kubernetes.io/projected/d31955ae-5786-4417-880f-f71c7d4347c1-kube-api-access-5blt7\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.526406 master-0 kubenswrapper[27835]: I0318 13:41:43.526373 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.526478 master-0 kubenswrapper[27835]: I0318 13:41:43.526443 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.528026 master-0 kubenswrapper[27835]: I0318 13:41:43.526536 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.528338 master-0 kubenswrapper[27835]: I0318 13:41:43.528062 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-f97ec006-a1f9-4165-a0e7-17643be3004a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b6b3d2-d656-4f65-93d0-76acd4e2a4a1\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637368 master-0 kubenswrapper[27835]: I0318 13:41:43.637221 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637573 master-0 kubenswrapper[27835]: I0318 13:41:43.637460 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637573 master-0 kubenswrapper[27835]: I0318 13:41:43.637521 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637573 master-0 kubenswrapper[27835]: I0318 13:41:43.637559 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5blt7\" (UniqueName: \"kubernetes.io/projected/d31955ae-5786-4417-880f-f71c7d4347c1-kube-api-access-5blt7\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637913 master-0 kubenswrapper[27835]: I0318 13:41:43.637643 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637913 master-0 kubenswrapper[27835]: I0318 13:41:43.637862 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.637913 master-0 kubenswrapper[27835]: I0318 13:41:43.637895 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.638014 master-0 kubenswrapper[27835]: I0318 13:41:43.637917 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f97ec006-a1f9-4165-a0e7-17643be3004a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b6b3d2-d656-4f65-93d0-76acd4e2a4a1\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.644566 master-0 kubenswrapper[27835]: I0318 13:41:43.641942 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.644566 master-0 kubenswrapper[27835]: I0318 13:41:43.642649 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-combined-ca-bundle\") 
pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.644566 master-0 kubenswrapper[27835]: I0318 13:41:43.643317 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.644566 master-0 kubenswrapper[27835]: I0318 13:41:43.644125 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d31955ae-5786-4417-880f-f71c7d4347c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.644566 master-0 kubenswrapper[27835]: I0318 13:41:43.644388 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.645258 master-0 kubenswrapper[27835]: I0318 13:41:43.645229 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d31955ae-5786-4417-880f-f71c7d4347c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0" Mar 18 13:41:43.648154 master-0 kubenswrapper[27835]: I0318 13:41:43.648112 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 13:41:43.648255 master-0 kubenswrapper[27835]: I0318 13:41:43.648174 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f97ec006-a1f9-4165-a0e7-17643be3004a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b6b3d2-d656-4f65-93d0-76acd4e2a4a1\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8be890b7d7b1b94a17808d2a3f6fe90f219b60f8016b3edab16c5128329463a1/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Mar 18 13:41:43.662816 master-0 kubenswrapper[27835]: I0318 13:41:43.662758 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5blt7\" (UniqueName: \"kubernetes.io/projected/d31955ae-5786-4417-880f-f71c7d4347c1-kube-api-access-5blt7\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0"
Mar 18 13:41:45.075147 master-0 kubenswrapper[27835]: I0318 13:41:45.075082 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f97ec006-a1f9-4165-a0e7-17643be3004a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b6b3d2-d656-4f65-93d0-76acd4e2a4a1\") pod \"ovsdbserver-nb-0\" (UID: \"d31955ae-5786-4417-880f-f71c7d4347c1\") " pod="openstack/ovsdbserver-nb-0"
Mar 18 13:41:45.164700 master-0 kubenswrapper[27835]: I0318 13:41:45.164646 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 18 13:41:46.699571 master-0 kubenswrapper[27835]: I0318 13:41:46.699333 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 18 13:41:46.701131 master-0 kubenswrapper[27835]: I0318 13:41:46.701070 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.703560 master-0 kubenswrapper[27835]: I0318 13:41:46.703501 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 18 13:41:46.704519 master-0 kubenswrapper[27835]: I0318 13:41:46.704487 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 18 13:41:46.704803 master-0 kubenswrapper[27835]: I0318 13:41:46.704769 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 18 13:41:46.723048 master-0 kubenswrapper[27835]: I0318 13:41:46.721616 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 18 13:41:46.824910 master-0 kubenswrapper[27835]: I0318 13:41:46.824858 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825151 master-0 kubenswrapper[27835]: I0318 13:41:46.825022 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5j7\" (UniqueName: \"kubernetes.io/projected/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-kube-api-access-hj5j7\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825326 master-0 kubenswrapper[27835]: I0318 13:41:46.825212 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825326 master-0 kubenswrapper[27835]: I0318 13:41:46.825283 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825447 master-0 kubenswrapper[27835]: I0318 13:41:46.825355 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825554 master-0 kubenswrapper[27835]: I0318 13:41:46.825531 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-405c2e0e-e0a7-4685-adb7-7852440341c0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17fdfc2-7d9e-4029-8857-dff6700948db\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825687 master-0 kubenswrapper[27835]: I0318 13:41:46.825672 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.825798 master-0 kubenswrapper[27835]: I0318 13:41:46.825780 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928293 master-0 kubenswrapper[27835]: I0318 13:41:46.928098 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928527 master-0 kubenswrapper[27835]: I0318 13:41:46.928308 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-405c2e0e-e0a7-4685-adb7-7852440341c0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17fdfc2-7d9e-4029-8857-dff6700948db\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928527 master-0 kubenswrapper[27835]: I0318 13:41:46.928344 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928527 master-0 kubenswrapper[27835]: I0318 13:41:46.928401 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928527 master-0 kubenswrapper[27835]: I0318 13:41:46.928483 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.928794 master-0 kubenswrapper[27835]: I0318 13:41:46.928679 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5j7\" (UniqueName: \"kubernetes.io/projected/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-kube-api-access-hj5j7\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.931123 master-0 kubenswrapper[27835]: I0318 13:41:46.928808 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.931123 master-0 kubenswrapper[27835]: I0318 13:41:46.928860 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.932112 master-0 kubenswrapper[27835]: I0318 13:41:46.931909 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.932112 master-0 kubenswrapper[27835]: I0318 13:41:46.932025 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.932112 master-0 kubenswrapper[27835]: I0318 13:41:46.932050 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.932889 master-0 kubenswrapper[27835]: I0318 13:41:46.932854 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-config\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.932980 master-0 kubenswrapper[27835]: I0318 13:41:46.932950 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.933059 master-0 kubenswrapper[27835]: I0318 13:41:46.932689 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 13:41:46.933112 master-0 kubenswrapper[27835]: I0318 13:41:46.933074 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-405c2e0e-e0a7-4685-adb7-7852440341c0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17fdfc2-7d9e-4029-8857-dff6700948db\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/440b418e993b4b28dd52bcfac693a44af6aa4a294cfba16ff5a74bf74a64b63c/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.949126 master-0 kubenswrapper[27835]: I0318 13:41:46.949058 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5j7\" (UniqueName: \"kubernetes.io/projected/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-kube-api-access-hj5j7\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:46.952182 master-0 kubenswrapper[27835]: I0318 13:41:46.952117 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a40472e4-a359-41e5-8e65-f6c7cb3b7ac7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:48.320583 master-0 kubenswrapper[27835]: I0318 13:41:48.320535 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-405c2e0e-e0a7-4685-adb7-7852440341c0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f17fdfc2-7d9e-4029-8857-dff6700948db\") pod \"ovsdbserver-sb-0\" (UID: \"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7\") " pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:48.548024 master-0 kubenswrapper[27835]: I0318 13:41:48.547469 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 18 13:41:51.221958 master-0 kubenswrapper[27835]: I0318 13:41:51.221855 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 18 13:41:52.186966 master-0 kubenswrapper[27835]: I0318 13:41:52.186850 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 18 13:41:52.214981 master-0 kubenswrapper[27835]: I0318 13:41:52.214936 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 18 13:41:52.233346 master-0 kubenswrapper[27835]: W0318 13:41:52.230134 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f51d7b8_7e16_4c10_8e64_a5af8a8522ed.slice/crio-4e0431fc3a4e13cdeb75d8aeaf2a1d822a2065956a9283ebb9dcb9a7155b5f59 WatchSource:0}: Error finding container 4e0431fc3a4e13cdeb75d8aeaf2a1d822a2065956a9283ebb9dcb9a7155b5f59: Status 404 returned error can't find the container with id 4e0431fc3a4e13cdeb75d8aeaf2a1d822a2065956a9283ebb9dcb9a7155b5f59
Mar 18 13:41:52.261679 master-0 kubenswrapper[27835]: I0318 13:41:52.261607 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed","Type":"ContainerStarted","Data":"4e0431fc3a4e13cdeb75d8aeaf2a1d822a2065956a9283ebb9dcb9a7155b5f59"}
Mar 18 13:41:52.262991 master-0 kubenswrapper[27835]: I0318 13:41:52.262934 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"52c3f355-8836-4d58-84ee-d6c2afb6c776","Type":"ContainerStarted","Data":"fd57a35c695c6fe0bbbd37c29a670805262cc8dc1b5dc2ee930dfaddb29c9c79"}
Mar 18 13:41:52.264010 master-0 kubenswrapper[27835]: I0318 13:41:52.263948 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"483c8547-dea7-4fd8-b4db-4849a346d73a","Type":"ContainerStarted","Data":"ed74a50cc661fd1830095d16426c5ab7cc6641b6fed21dc00db93719bab3dcba"}
Mar 18 13:41:52.436101 master-0 kubenswrapper[27835]: I0318 13:41:52.436007 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 18 13:41:52.588964 master-0 kubenswrapper[27835]: I0318 13:41:52.588910 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp"]
Mar 18 13:41:52.800463 master-0 kubenswrapper[27835]: I0318 13:41:52.798140 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 18 13:41:52.810657 master-0 kubenswrapper[27835]: W0318 13:41:52.810618 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c70993_8f51_411e_ae8d_65ea5161c75e.slice/crio-bc15c50bd9ed0925c37440e3c5a3b52dd7c36ac5d584da42029e24b02e7aad28 WatchSource:0}: Error finding container bc15c50bd9ed0925c37440e3c5a3b52dd7c36ac5d584da42029e24b02e7aad28: Status 404 returned error can't find the container with id bc15c50bd9ed0925c37440e3c5a3b52dd7c36ac5d584da42029e24b02e7aad28
Mar 18 13:41:52.892076 master-0 kubenswrapper[27835]: I0318 13:41:52.892001 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 18 13:41:52.894541 master-0 kubenswrapper[27835]: W0318 13:41:52.894488 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda40472e4_a359_41e5_8e65_f6c7cb3b7ac7.slice/crio-47cd7fe673d77a1d236681dae353bd137ebebbf1110642bf8fce0015a9ad9f1c WatchSource:0}: Error finding container 47cd7fe673d77a1d236681dae353bd137ebebbf1110642bf8fce0015a9ad9f1c: Status 404 returned error can't find the container with id 47cd7fe673d77a1d236681dae353bd137ebebbf1110642bf8fce0015a9ad9f1c
Mar 18 13:41:53.274122 master-0 kubenswrapper[27835]: I0318 13:41:53.274042 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b76c81c-7824-4bfa-af04-9c1fd928fb63","Type":"ContainerStarted","Data":"63cb1dfae76c0e781366603a16d4ab9842d6f2794bcd5b97149e645817839c1c"}
Mar 18 13:41:53.276525 master-0 kubenswrapper[27835]: I0318 13:41:53.276478 27835 generic.go:334] "Generic (PLEG): container finished" podID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerID="28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b" exitCode=0
Mar 18 13:41:53.276951 master-0 kubenswrapper[27835]: I0318 13:41:53.276682 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" event={"ID":"521cb15d-54dc-46b7-bab1-a7389273be9f","Type":"ContainerDied","Data":"28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b"}
Mar 18 13:41:53.280110 master-0 kubenswrapper[27835]: I0318 13:41:53.280067 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b2c70993-8f51-411e-ae8d-65ea5161c75e","Type":"ContainerStarted","Data":"bc15c50bd9ed0925c37440e3c5a3b52dd7c36ac5d584da42029e24b02e7aad28"}
Mar 18 13:41:53.283186 master-0 kubenswrapper[27835]: I0318 13:41:53.283155 27835 generic.go:334] "Generic (PLEG): container finished" podID="6a036cb1-24e8-401c-af08-1291061013fa" containerID="f378b2cd7842dc383b721f1b96d394d299a694cb1c6c4db8ef5f21ade54aabfe" exitCode=0
Mar 18 13:41:53.283264 master-0 kubenswrapper[27835]: I0318 13:41:53.283216 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" event={"ID":"6a036cb1-24e8-401c-af08-1291061013fa","Type":"ContainerDied","Data":"f378b2cd7842dc383b721f1b96d394d299a694cb1c6c4db8ef5f21ade54aabfe"}
Mar 18 13:41:53.284975 master-0 kubenswrapper[27835]: I0318 13:41:53.284942 27835 generic.go:334] "Generic (PLEG): container finished" podID="8e9fdf93-f5f4-4a4a-b20c-c786e160437e" containerID="9957aa0f12a2d22dfb84f76895b88c82ebf4a6f0059b387493f36c58f4efcb0e" exitCode=0
Mar 18 13:41:53.285025 master-0 kubenswrapper[27835]: I0318 13:41:53.284988 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc" event={"ID":"8e9fdf93-f5f4-4a4a-b20c-c786e160437e","Type":"ContainerDied","Data":"9957aa0f12a2d22dfb84f76895b88c82ebf4a6f0059b387493f36c58f4efcb0e"}
Mar 18 13:41:53.287279 master-0 kubenswrapper[27835]: I0318 13:41:53.287191 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp" event={"ID":"c90d0a00-53e9-4145-8137-d73cee5337f0","Type":"ContainerStarted","Data":"71d9a2136eac1d6ef3be51c61db1868d3ec27a3ddcaf1a6939dbc49ed864a81e"}
Mar 18 13:41:53.289053 master-0 kubenswrapper[27835]: I0318 13:41:53.288942 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7","Type":"ContainerStarted","Data":"47cd7fe673d77a1d236681dae353bd137ebebbf1110642bf8fce0015a9ad9f1c"}
Mar 18 13:41:53.292981 master-0 kubenswrapper[27835]: I0318 13:41:53.291909 27835 generic.go:334] "Generic (PLEG): container finished" podID="bd002959-2759-4b29-97c8-05ce0441059d" containerID="e7cdfb70b3cc7df51c7c1b85a032d733aefcc0b4e6c122dec64143d033d2ece9" exitCode=0
Mar 18 13:41:53.294294 master-0 kubenswrapper[27835]: I0318 13:41:53.292010 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf" event={"ID":"bd002959-2759-4b29-97c8-05ce0441059d","Type":"ContainerDied","Data":"e7cdfb70b3cc7df51c7c1b85a032d733aefcc0b4e6c122dec64143d033d2ece9"}
Mar 18 13:41:53.425543 master-0 kubenswrapper[27835]: I0318 13:41:53.425478 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 18 13:41:53.574516 master-0 kubenswrapper[27835]: I0318 13:41:53.574349 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gtvxg"]
Mar 18 13:41:54.663141 master-0 kubenswrapper[27835]: W0318 13:41:54.663082 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0b4c95e_e177_4d01_bd2e_ff94c66d594d.slice/crio-be3f726894d239cc8ab3a1694315702b760baff47005f031917902b948c1f33b WatchSource:0}: Error finding container be3f726894d239cc8ab3a1694315702b760baff47005f031917902b948c1f33b: Status 404 returned error can't find the container with id be3f726894d239cc8ab3a1694315702b760baff47005f031917902b948c1f33b
Mar 18 13:41:54.788956 master-0 kubenswrapper[27835]: I0318 13:41:54.788898 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:54.796462 master-0 kubenswrapper[27835]: I0318 13:41:54.795244 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:54.896395 master-0 kubenswrapper[27835]: I0318 13:41:54.896322 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc\") pod \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") "
Mar 18 13:41:54.896395 master-0 kubenswrapper[27835]: I0318 13:41:54.896426 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmq2j\" (UniqueName: \"kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j\") pod \"bd002959-2759-4b29-97c8-05ce0441059d\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") "
Mar 18 13:41:54.896707 master-0 kubenswrapper[27835]: I0318 13:41:54.896448 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config\") pod \"bd002959-2759-4b29-97c8-05ce0441059d\" (UID: \"bd002959-2759-4b29-97c8-05ce0441059d\") "
Mar 18 13:41:54.896707 master-0 kubenswrapper[27835]: I0318 13:41:54.896581 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config\") pod \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") "
Mar 18 13:41:54.896707 master-0 kubenswrapper[27835]: I0318 13:41:54.896601 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6q5\" (UniqueName: \"kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5\") pod \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\" (UID: \"8e9fdf93-f5f4-4a4a-b20c-c786e160437e\") "
Mar 18 13:41:54.900468 master-0 kubenswrapper[27835]: I0318 13:41:54.900395 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5" (OuterVolumeSpecName: "kube-api-access-mv6q5") pod "8e9fdf93-f5f4-4a4a-b20c-c786e160437e" (UID: "8e9fdf93-f5f4-4a4a-b20c-c786e160437e"). InnerVolumeSpecName "kube-api-access-mv6q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:41:54.900913 master-0 kubenswrapper[27835]: I0318 13:41:54.900875 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j" (OuterVolumeSpecName: "kube-api-access-bmq2j") pod "bd002959-2759-4b29-97c8-05ce0441059d" (UID: "bd002959-2759-4b29-97c8-05ce0441059d"). InnerVolumeSpecName "kube-api-access-bmq2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:41:54.921052 master-0 kubenswrapper[27835]: I0318 13:41:54.920944 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config" (OuterVolumeSpecName: "config") pod "bd002959-2759-4b29-97c8-05ce0441059d" (UID: "bd002959-2759-4b29-97c8-05ce0441059d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:41:54.924847 master-0 kubenswrapper[27835]: I0318 13:41:54.924675 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config" (OuterVolumeSpecName: "config") pod "8e9fdf93-f5f4-4a4a-b20c-c786e160437e" (UID: "8e9fdf93-f5f4-4a4a-b20c-c786e160437e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:41:54.926845 master-0 kubenswrapper[27835]: I0318 13:41:54.926805 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e9fdf93-f5f4-4a4a-b20c-c786e160437e" (UID: "8e9fdf93-f5f4-4a4a-b20c-c786e160437e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:41:54.998688 master-0 kubenswrapper[27835]: I0318 13:41:54.998401 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:41:54.998688 master-0 kubenswrapper[27835]: I0318 13:41:54.998449 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6q5\" (UniqueName: \"kubernetes.io/projected/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-kube-api-access-mv6q5\") on node \"master-0\" DevicePath \"\""
Mar 18 13:41:54.998688 master-0 kubenswrapper[27835]: I0318 13:41:54.998459 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e9fdf93-f5f4-4a4a-b20c-c786e160437e-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:41:54.998688 master-0 kubenswrapper[27835]: I0318 13:41:54.998468 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmq2j\" (UniqueName: \"kubernetes.io/projected/bd002959-2759-4b29-97c8-05ce0441059d-kube-api-access-bmq2j\") on node \"master-0\" DevicePath \"\""
Mar 18 13:41:54.998688 master-0 kubenswrapper[27835]: I0318 13:41:54.998477 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd002959-2759-4b29-97c8-05ce0441059d-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:41:55.317178 master-0 kubenswrapper[27835]: I0318 13:41:55.317051 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d31955ae-5786-4417-880f-f71c7d4347c1","Type":"ContainerStarted","Data":"b6e61ab4fb44eb607eabd2c563248b4d52ddbcea2201ddfa77556948460de3cc"}
Mar 18 13:41:55.319715 master-0 kubenswrapper[27835]: I0318 13:41:55.319485 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc"
Mar 18 13:41:55.319715 master-0 kubenswrapper[27835]: I0318 13:41:55.319493 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-ncvcc" event={"ID":"8e9fdf93-f5f4-4a4a-b20c-c786e160437e","Type":"ContainerDied","Data":"59a4029f047cc0c5e11264398f342d4d4b8a3dee860035f3d71fdcbbdc6a1028"}
Mar 18 13:41:55.319715 master-0 kubenswrapper[27835]: I0318 13:41:55.319560 27835 scope.go:117] "RemoveContainer" containerID="9957aa0f12a2d22dfb84f76895b88c82ebf4a6f0059b387493f36c58f4efcb0e"
Mar 18 13:41:55.326036 master-0 kubenswrapper[27835]: I0318 13:41:55.323349 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtvxg" event={"ID":"c0b4c95e-e177-4d01-bd2e-ff94c66d594d","Type":"ContainerStarted","Data":"be3f726894d239cc8ab3a1694315702b760baff47005f031917902b948c1f33b"}
Mar 18 13:41:55.326383 master-0 kubenswrapper[27835]: I0318 13:41:55.326308 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf" event={"ID":"bd002959-2759-4b29-97c8-05ce0441059d","Type":"ContainerDied","Data":"fae4cb6d26dead2bd894b9d6e510738cfc964f2a39f33601f13de9940a044061"}
Mar 18 13:41:55.326717 master-0 kubenswrapper[27835]: I0318 13:41:55.326692 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-kkhsf"
Mar 18 13:41:55.399791 master-0 kubenswrapper[27835]: I0318 13:41:55.399721 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"]
Mar 18 13:41:55.427020 master-0 kubenswrapper[27835]: I0318 13:41:55.426916 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-ncvcc"]
Mar 18 13:41:55.445398 master-0 kubenswrapper[27835]: I0318 13:41:55.445297 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:55.467586 master-0 kubenswrapper[27835]: I0318 13:41:55.467522 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-kkhsf"]
Mar 18 13:41:56.006538 master-0 kubenswrapper[27835]: I0318 13:41:56.006402 27835 scope.go:117] "RemoveContainer" containerID="e7cdfb70b3cc7df51c7c1b85a032d733aefcc0b4e6c122dec64143d033d2ece9"
Mar 18 13:41:56.302720 master-0 kubenswrapper[27835]: I0318 13:41:56.302218 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9fdf93-f5f4-4a4a-b20c-c786e160437e" path="/var/lib/kubelet/pods/8e9fdf93-f5f4-4a4a-b20c-c786e160437e/volumes"
Mar 18 13:41:56.305476 master-0 kubenswrapper[27835]: I0318 13:41:56.302941 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd002959-2759-4b29-97c8-05ce0441059d" path="/var/lib/kubelet/pods/bd002959-2759-4b29-97c8-05ce0441059d/volumes"
Mar 18 13:42:01.410635 master-0 kubenswrapper[27835]: I0318 13:42:01.410573 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" event={"ID":"6a036cb1-24e8-401c-af08-1291061013fa","Type":"ContainerStarted","Data":"6728595522ef2ce6242ce29be25c45f543787f2c638e59b4d987ce1c24d424fb"}
Mar 18 13:42:01.411266 master-0 kubenswrapper[27835]: I0318 13:42:01.410673 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"
Mar 18 13:42:01.413429 master-0 kubenswrapper[27835]: I0318 13:42:01.413384 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"52c3f355-8836-4d58-84ee-d6c2afb6c776","Type":"ContainerStarted","Data":"20e46a89ccfadfd15c6cf5a27dffe496782b05fb8f315b37442681dfb48da905"}
Mar 18 13:42:01.413710 master-0 kubenswrapper[27835]: I0318 13:42:01.413617 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 18 13:42:01.419749 master-0 kubenswrapper[27835]: I0318 13:42:01.419659 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp" event={"ID":"c90d0a00-53e9-4145-8137-d73cee5337f0","Type":"ContainerStarted","Data":"e56cd9ff2e75dad48467c405fa69944db0ce092874441711a8b7c518f6ec6ef7"}
Mar 18 13:42:01.420007 master-0 kubenswrapper[27835]: I0318 13:42:01.419970 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sf7pp"
Mar 18 13:42:01.422912 master-0 kubenswrapper[27835]: I0318 13:42:01.422877 27835 generic.go:334] "Generic (PLEG): container finished" podID="c0b4c95e-e177-4d01-bd2e-ff94c66d594d" containerID="c5ae819218d03334b1e5f56e80681f0d4311a9f3685ea2750f312fceef2ddb87" exitCode=0
Mar 18 13:42:01.423389 master-0 kubenswrapper[27835]: I0318 13:42:01.423205 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtvxg" event={"ID":"c0b4c95e-e177-4d01-bd2e-ff94c66d594d","Type":"ContainerDied","Data":"c5ae819218d03334b1e5f56e80681f0d4311a9f3685ea2750f312fceef2ddb87"}
Mar 18 13:42:01.426916 master-0 kubenswrapper[27835]: I0318 13:42:01.426476 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7","Type":"ContainerStarted","Data":"8e26e63bda3418a69e1743e36863c1eb2668848474d5f89a482eb1bd3dbdea62"}
Mar 18 13:42:01.444103 master-0 kubenswrapper[27835]: I0318 13:42:01.442576 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d31955ae-5786-4417-880f-f71c7d4347c1","Type":"ContainerStarted","Data":"5ba7763a4108cba1550f9d116bc37a13fc5ed40d8ddb56aef559f4883e463a6e"}
Mar 18 13:42:01.448614 master-0 kubenswrapper[27835]: I0318 13:42:01.448387 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b2c70993-8f51-411e-ae8d-65ea5161c75e","Type":"ContainerStarted","Data":"e4fee481d77cea1b4f48d9330c187539949097e66cf696a0addee5b6cd617ba2"}
Mar 18 13:42:01.452752 master-0 kubenswrapper[27835]: I0318 13:42:01.452601 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"483c8547-dea7-4fd8-b4db-4849a346d73a","Type":"ContainerStarted","Data":"9fe6640f73598fed88fcab64143e85b3588aedda7101bb91dfd343996978be14"}
Mar 18 13:42:01.462390 master-0 kubenswrapper[27835]: I0318 13:42:01.462192 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" event={"ID":"521cb15d-54dc-46b7-bab1-a7389273be9f","Type":"ContainerStarted","Data":"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6"}
Mar 18 13:42:01.462985 master-0 kubenswrapper[27835]: I0318 13:42:01.462730 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76849d6659-hmn6c"
Mar 18 13:42:01.476773 master-0 kubenswrapper[27835]: I0318 13:42:01.476671 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" podStartSLOduration=11.720930298 podStartE2EDuration="29.476642376s" podCreationTimestamp="2026-03-18 13:41:32 +0000 UTC" firstStartedPulling="2026-03-18 13:41:34.233747263 +0000 UTC m=+1058.198958823" lastFinishedPulling="2026-03-18 13:41:51.989459341 +0000 UTC m=+1075.954670901" observedRunningTime="2026-03-18 13:42:01.430127124 +0000 UTC m=+1085.395338694" watchObservedRunningTime="2026-03-18 13:42:01.476642376 +0000 UTC m=+1085.441853936"
Mar 18 13:42:01.507278 master-0 kubenswrapper[27835]: I0318 13:42:01.507192 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sf7pp" podStartSLOduration=11.988531302 podStartE2EDuration="19.507171316s" podCreationTimestamp="2026-03-18 13:41:42 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.641012695 +0000 UTC m=+1076.606224255" lastFinishedPulling="2026-03-18 13:42:00.159652699 +0000 UTC m=+1084.124864269" observedRunningTime="2026-03-18 13:42:01.46584523 +0000 UTC m=+1085.431056790" watchObservedRunningTime="2026-03-18 13:42:01.507171316 +0000 UTC m=+1085.472382876"
Mar 18 13:42:01.518227 master-0 kubenswrapper[27835]: I0318 13:42:01.518149 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.547749605 podStartE2EDuration="25.518128406s" podCreationTimestamp="2026-03-18 13:41:36 +0000 UTC" firstStartedPulling="2026-03-18 13:41:51.787842825 +0000 UTC m=+1075.753054385" lastFinishedPulling="2026-03-18 13:41:59.758221626 +0000 UTC m=+1083.723433186" observedRunningTime="2026-03-18 13:42:01.485394428 +0000 UTC m=+1085.450605988" watchObservedRunningTime="2026-03-18 13:42:01.518128406 +0000 UTC m=+1085.483339966"
Mar 18 13:42:01.653243 master-0 kubenswrapper[27835]: I0318 13:42:01.653141 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" podStartSLOduration=11.643312177 podStartE2EDuration="29.653115005s" podCreationTimestamp="2026-03-18 13:41:32 +0000 UTC" firstStartedPulling="2026-03-18 13:41:34.103877646 +0000 UTC m=+1058.069089206" lastFinishedPulling="2026-03-18 13:41:52.113680474 +0000 UTC m=+1076.078892034" observedRunningTime="2026-03-18 13:42:01.607310341 +0000 UTC m=+1085.572521901" watchObservedRunningTime="2026-03-18 13:42:01.653115005 +0000 UTC m=+1085.618326585"
Mar 18
13:42:02.473363 master-0 kubenswrapper[27835]: I0318 13:42:02.473286 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b76c81c-7824-4bfa-af04-9c1fd928fb63","Type":"ContainerStarted","Data":"78946f613903c49866632a32a02d94783bbf6c6a66400c112e85ef599af226c9"} Mar 18 13:42:02.477001 master-0 kubenswrapper[27835]: I0318 13:42:02.476345 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed","Type":"ContainerStarted","Data":"f03d087364e016e051b903fc5046b08a9917a4453cf4682b4b351e3b3f54bdba"} Mar 18 13:42:02.479166 master-0 kubenswrapper[27835]: I0318 13:42:02.479069 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtvxg" event={"ID":"c0b4c95e-e177-4d01-bd2e-ff94c66d594d","Type":"ContainerStarted","Data":"84d670193b3b852ca544e6f41f88d65e74ac35463899a8ea39bdd68197825b6c"} Mar 18 13:42:02.479166 master-0 kubenswrapper[27835]: I0318 13:42:02.479119 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtvxg" event={"ID":"c0b4c95e-e177-4d01-bd2e-ff94c66d594d","Type":"ContainerStarted","Data":"0b64af067773530f3b7ca08daf24cccf5a493f463d4ca53a6ed9e50931a06342"} Mar 18 13:42:02.816550 master-0 kubenswrapper[27835]: I0318 13:42:02.816428 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:42:02.816550 master-0 kubenswrapper[27835]: I0318 13:42:02.816521 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:42:02.835267 master-0 kubenswrapper[27835]: I0318 13:42:02.835167 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gtvxg" podStartSLOduration=15.36147661 podStartE2EDuration="20.835148995s" podCreationTimestamp="2026-03-18 13:41:42 +0000 UTC" 
firstStartedPulling="2026-03-18 13:41:54.665580903 +0000 UTC m=+1078.630792463" lastFinishedPulling="2026-03-18 13:42:00.139253288 +0000 UTC m=+1084.104464848" observedRunningTime="2026-03-18 13:42:02.829270139 +0000 UTC m=+1086.794481699" watchObservedRunningTime="2026-03-18 13:42:02.835148995 +0000 UTC m=+1086.800360565" Mar 18 13:42:06.544384 master-0 kubenswrapper[27835]: I0318 13:42:06.544319 27835 generic.go:334] "Generic (PLEG): container finished" podID="b2c70993-8f51-411e-ae8d-65ea5161c75e" containerID="e4fee481d77cea1b4f48d9330c187539949097e66cf696a0addee5b6cd617ba2" exitCode=0 Mar 18 13:42:06.544946 master-0 kubenswrapper[27835]: I0318 13:42:06.544435 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b2c70993-8f51-411e-ae8d-65ea5161c75e","Type":"ContainerDied","Data":"e4fee481d77cea1b4f48d9330c187539949097e66cf696a0addee5b6cd617ba2"} Mar 18 13:42:06.547716 master-0 kubenswrapper[27835]: I0318 13:42:06.547665 27835 generic.go:334] "Generic (PLEG): container finished" podID="483c8547-dea7-4fd8-b4db-4849a346d73a" containerID="9fe6640f73598fed88fcab64143e85b3588aedda7101bb91dfd343996978be14" exitCode=0 Mar 18 13:42:06.547796 master-0 kubenswrapper[27835]: I0318 13:42:06.547717 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"483c8547-dea7-4fd8-b4db-4849a346d73a","Type":"ContainerDied","Data":"9fe6640f73598fed88fcab64143e85b3588aedda7101bb91dfd343996978be14"} Mar 18 13:42:07.170032 master-0 kubenswrapper[27835]: I0318 13:42:07.169936 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 18 13:42:07.558860 master-0 kubenswrapper[27835]: I0318 13:42:07.558795 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b2c70993-8f51-411e-ae8d-65ea5161c75e","Type":"ContainerStarted","Data":"00a6f197c204a3862249f3774be33763fca65761622cd4024c7c5b14c25d0766"} Mar 18 13:42:07.562039 master-0 kubenswrapper[27835]: I0318 13:42:07.561872 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"483c8547-dea7-4fd8-b4db-4849a346d73a","Type":"ContainerStarted","Data":"3f2ced92cca308ab1274afe50acd18ad9c7f13620a2a64877d475b7db72500c5"} Mar 18 13:42:07.565023 master-0 kubenswrapper[27835]: I0318 13:42:07.563949 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a40472e4-a359-41e5-8e65-f6c7cb3b7ac7","Type":"ContainerStarted","Data":"ca47c15db6cb00d9d7857188997680f5fe152bb398d561c014b8b24e3c2927cc"} Mar 18 13:42:07.908606 master-0 kubenswrapper[27835]: I0318 13:42:07.908400 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.531361585 podStartE2EDuration="32.908385424s" podCreationTimestamp="2026-03-18 13:41:35 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.813519279 +0000 UTC m=+1076.778730839" lastFinishedPulling="2026-03-18 13:42:00.190543128 +0000 UTC m=+1084.155754678" observedRunningTime="2026-03-18 13:42:07.90746147 +0000 UTC m=+1091.872673040" watchObservedRunningTime="2026-03-18 13:42:07.908385424 +0000 UTC m=+1091.873596974" Mar 18 13:42:07.930754 master-0 kubenswrapper[27835]: I0318 13:42:07.930678 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:42:08.004262 master-0 kubenswrapper[27835]: I0318 13:42:07.997061 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.253640932 podStartE2EDuration="23.997042655s" podCreationTimestamp="2026-03-18 13:41:44 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.89879055 +0000 UTC 
m=+1076.864002110" lastFinishedPulling="2026-03-18 13:42:06.642192273 +0000 UTC m=+1090.607403833" observedRunningTime="2026-03-18 13:42:07.979725105 +0000 UTC m=+1091.944936655" watchObservedRunningTime="2026-03-18 13:42:07.997042655 +0000 UTC m=+1091.962254215" Mar 18 13:42:08.042562 master-0 kubenswrapper[27835]: I0318 13:42:08.041498 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.11589174 podStartE2EDuration="34.041470803s" podCreationTimestamp="2026-03-18 13:41:34 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.213255194 +0000 UTC m=+1076.178466754" lastFinishedPulling="2026-03-18 13:42:00.138834257 +0000 UTC m=+1084.104045817" observedRunningTime="2026-03-18 13:42:08.017942209 +0000 UTC m=+1091.983153769" watchObservedRunningTime="2026-03-18 13:42:08.041470803 +0000 UTC m=+1092.006682373" Mar 18 13:42:08.435618 master-0 kubenswrapper[27835]: I0318 13:42:08.435565 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:42:08.518224 master-0 kubenswrapper[27835]: I0318 13:42:08.517025 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"] Mar 18 13:42:08.550501 master-0 kubenswrapper[27835]: I0318 13:42:08.547657 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 18 13:42:08.599228 master-0 kubenswrapper[27835]: I0318 13:42:08.599143 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="dnsmasq-dns" containerID="cri-o://39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6" gracePeriod=10 Mar 18 13:42:08.603436 master-0 kubenswrapper[27835]: I0318 13:42:08.601648 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d31955ae-5786-4417-880f-f71c7d4347c1","Type":"ContainerStarted","Data":"15e4ecf27bacee944e692252ddba45eea2ca82c7ddbc061aedb7cdd2e2c2cfdf"} Mar 18 13:42:08.637067 master-0 kubenswrapper[27835]: I0318 13:42:08.636461 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.981568174 podStartE2EDuration="27.636442357s" podCreationTimestamp="2026-03-18 13:41:41 +0000 UTC" firstStartedPulling="2026-03-18 13:41:54.672664841 +0000 UTC m=+1078.637876411" lastFinishedPulling="2026-03-18 13:42:07.327539024 +0000 UTC m=+1091.292750594" observedRunningTime="2026-03-18 13:42:08.630376547 +0000 UTC m=+1092.595588117" watchObservedRunningTime="2026-03-18 13:42:08.636442357 +0000 UTC m=+1092.601653917" Mar 18 13:42:09.165511 master-0 kubenswrapper[27835]: I0318 13:42:09.165450 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 18 13:42:09.196339 master-0 kubenswrapper[27835]: I0318 13:42:09.196012 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:42:09.204429 master-0 kubenswrapper[27835]: I0318 13:42:09.204376 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 18 13:42:09.374511 master-0 kubenswrapper[27835]: I0318 13:42:09.373660 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config\") pod \"521cb15d-54dc-46b7-bab1-a7389273be9f\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " Mar 18 13:42:09.374511 master-0 kubenswrapper[27835]: I0318 13:42:09.373762 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-547dn\" (UniqueName: \"kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn\") pod \"521cb15d-54dc-46b7-bab1-a7389273be9f\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " Mar 18 13:42:09.374511 master-0 kubenswrapper[27835]: I0318 13:42:09.373909 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc\") pod \"521cb15d-54dc-46b7-bab1-a7389273be9f\" (UID: \"521cb15d-54dc-46b7-bab1-a7389273be9f\") " Mar 18 13:42:09.380838 master-0 kubenswrapper[27835]: I0318 13:42:09.380767 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn" (OuterVolumeSpecName: "kube-api-access-547dn") pod "521cb15d-54dc-46b7-bab1-a7389273be9f" (UID: "521cb15d-54dc-46b7-bab1-a7389273be9f"). InnerVolumeSpecName "kube-api-access-547dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:09.479564 master-0 kubenswrapper[27835]: I0318 13:42:09.478713 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-547dn\" (UniqueName: \"kubernetes.io/projected/521cb15d-54dc-46b7-bab1-a7389273be9f-kube-api-access-547dn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:09.479564 master-0 kubenswrapper[27835]: I0318 13:42:09.479384 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "521cb15d-54dc-46b7-bab1-a7389273be9f" (UID: "521cb15d-54dc-46b7-bab1-a7389273be9f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:09.497258 master-0 kubenswrapper[27835]: I0318 13:42:09.497198 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config" (OuterVolumeSpecName: "config") pod "521cb15d-54dc-46b7-bab1-a7389273be9f" (UID: "521cb15d-54dc-46b7-bab1-a7389273be9f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:09.549114 master-0 kubenswrapper[27835]: I0318 13:42:09.547344 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 18 13:42:09.582591 master-0 kubenswrapper[27835]: I0318 13:42:09.582137 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:09.582591 master-0 kubenswrapper[27835]: I0318 13:42:09.582181 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521cb15d-54dc-46b7-bab1-a7389273be9f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.604924 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"] Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: E0318 13:42:09.605325 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9fdf93-f5f4-4a4a-b20c-c786e160437e" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605339 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9fdf93-f5f4-4a4a-b20c-c786e160437e" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: E0318 13:42:09.605361 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605369 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: E0318 13:42:09.605401 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="dnsmasq-dns" Mar 18 
13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605440 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="dnsmasq-dns" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: E0318 13:42:09.605457 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd002959-2759-4b29-97c8-05ce0441059d" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605464 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd002959-2759-4b29-97c8-05ce0441059d" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605671 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9fdf93-f5f4-4a4a-b20c-c786e160437e" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605685 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerName="dnsmasq-dns" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.605710 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd002959-2759-4b29-97c8-05ce0441059d" containerName="init" Mar 18 13:42:09.607357 master-0 kubenswrapper[27835]: I0318 13:42:09.606670 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.619101 master-0 kubenswrapper[27835]: I0318 13:42:09.619022 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 18 13:42:09.624877 master-0 kubenswrapper[27835]: I0318 13:42:09.624541 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"] Mar 18 13:42:09.670781 master-0 kubenswrapper[27835]: I0318 13:42:09.670744 27835 generic.go:334] "Generic (PLEG): container finished" podID="521cb15d-54dc-46b7-bab1-a7389273be9f" containerID="39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6" exitCode=0 Mar 18 13:42:09.671951 master-0 kubenswrapper[27835]: I0318 13:42:09.671932 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" Mar 18 13:42:09.677153 master-0 kubenswrapper[27835]: I0318 13:42:09.674532 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" event={"ID":"521cb15d-54dc-46b7-bab1-a7389273be9f","Type":"ContainerDied","Data":"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6"} Mar 18 13:42:09.677153 master-0 kubenswrapper[27835]: I0318 13:42:09.674604 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hmn6c" event={"ID":"521cb15d-54dc-46b7-bab1-a7389273be9f","Type":"ContainerDied","Data":"ab5e12233e5308b13b8f8f08a63193c22e5c0b430c8ea8dea2d6c41246fbe987"} Mar 18 13:42:09.677153 master-0 kubenswrapper[27835]: I0318 13:42:09.674623 27835 scope.go:117] "RemoveContainer" containerID="39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6" Mar 18 13:42:09.677153 master-0 kubenswrapper[27835]: I0318 13:42:09.675757 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 18 13:42:09.755144 master-0 kubenswrapper[27835]: I0318 
13:42:09.755109 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 18 13:42:09.762298 master-0 kubenswrapper[27835]: I0318 13:42:09.762237 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 18 13:42:09.789221 master-0 kubenswrapper[27835]: I0318 13:42:09.789097 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.793428 master-0 kubenswrapper[27835]: I0318 13:42:09.790213 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgkw4\" (UniqueName: \"kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.793428 master-0 kubenswrapper[27835]: I0318 13:42:09.790699 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.827924 master-0 kubenswrapper[27835]: I0318 13:42:09.827062 27835 scope.go:117] "RemoveContainer" containerID="28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b" Mar 18 13:42:09.872215 master-0 kubenswrapper[27835]: I0318 13:42:09.872163 27835 scope.go:117] "RemoveContainer" containerID="39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6" Mar 18 13:42:09.875109 master-0 kubenswrapper[27835]: 
E0318 13:42:09.874981 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6\": container with ID starting with 39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6 not found: ID does not exist" containerID="39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6" Mar 18 13:42:09.875109 master-0 kubenswrapper[27835]: I0318 13:42:09.875067 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6"} err="failed to get container status \"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6\": rpc error: code = NotFound desc = could not find container \"39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6\": container with ID starting with 39fcdd7c178f14e444506d2085e2bf4cb2efd1c2f9e8ee8b358785773a846cd6 not found: ID does not exist" Mar 18 13:42:09.875237 master-0 kubenswrapper[27835]: I0318 13:42:09.875122 27835 scope.go:117] "RemoveContainer" containerID="28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b" Mar 18 13:42:09.877049 master-0 kubenswrapper[27835]: E0318 13:42:09.877005 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b\": container with ID starting with 28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b not found: ID does not exist" containerID="28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b" Mar 18 13:42:09.877143 master-0 kubenswrapper[27835]: I0318 13:42:09.877054 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b"} err="failed to get container status 
\"28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b\": rpc error: code = NotFound desc = could not find container \"28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b\": container with ID starting with 28898c08f5a8853c1bf7282c019d927f5848f9447e446e94b5a724806579033b not found: ID does not exist" Mar 18 13:42:09.898585 master-0 kubenswrapper[27835]: I0318 13:42:09.893555 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.898585 master-0 kubenswrapper[27835]: I0318 13:42:09.893676 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.898585 master-0 kubenswrapper[27835]: I0318 13:42:09.893708 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgkw4\" (UniqueName: \"kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.898585 master-0 kubenswrapper[27835]: I0318 13:42:09.894859 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.898585 master-0 kubenswrapper[27835]: I0318 13:42:09.895465 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.919163 master-0 kubenswrapper[27835]: I0318 13:42:09.918513 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"] Mar 18 13:42:09.932057 master-0 kubenswrapper[27835]: I0318 13:42:09.931534 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgkw4\" (UniqueName: \"kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4\") pod \"dnsmasq-dns-7bb8ffc699-tsdpw\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") " pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:09.946086 master-0 kubenswrapper[27835]: I0318 13:42:09.946022 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hmn6c"] Mar 18 13:42:10.063387 master-0 kubenswrapper[27835]: I0318 13:42:10.063310 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" Mar 18 13:42:10.108062 master-0 kubenswrapper[27835]: I0318 13:42:10.107912 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"] Mar 18 13:42:10.297432 master-0 kubenswrapper[27835]: I0318 13:42:10.297269 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521cb15d-54dc-46b7-bab1-a7389273be9f" path="/var/lib/kubelet/pods/521cb15d-54dc-46b7-bab1-a7389273be9f/volumes" Mar 18 13:42:10.744843 master-0 kubenswrapper[27835]: W0318 13:42:10.744740 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6279172_0461_4f84_98e1_e457cba12d40.slice/crio-d82ff56d86ea04ca10cb0b51f96e23fc9ab246509f085454d1274d3fa2982fd4 WatchSource:0}: Error finding container d82ff56d86ea04ca10cb0b51f96e23fc9ab246509f085454d1274d3fa2982fd4: Status 404 returned error can't find the container with id d82ff56d86ea04ca10cb0b51f96e23fc9ab246509f085454d1274d3fa2982fd4 Mar 18 13:42:10.745497 master-0 kubenswrapper[27835]: I0318 13:42:10.745166 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"] Mar 18 13:42:10.870052 master-0 kubenswrapper[27835]: I0318 13:42:10.869924 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-859fd45fb7-tsnzf"] Mar 18 13:42:10.878609 master-0 kubenswrapper[27835]: I0318 13:42:10.872377 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:10.910068 master-0 kubenswrapper[27835]: I0318 13:42:10.910019 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 18 13:42:10.960803 master-0 kubenswrapper[27835]: I0318 13:42:10.958010 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-859fd45fb7-tsnzf"] Mar 18 13:42:10.985540 master-0 kubenswrapper[27835]: I0318 13:42:10.970868 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-zkgwp"] Mar 18 13:42:10.985540 master-0 kubenswrapper[27835]: I0318 13:42:10.972125 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:10.985540 master-0 kubenswrapper[27835]: I0318 13:42:10.981097 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 18 13:42:11.007525 master-0 kubenswrapper[27835]: I0318 13:42:11.007136 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zkgwp"] Mar 18 13:42:11.050916 master-0 kubenswrapper[27835]: I0318 13:42:11.050877 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.051141 master-0 kubenswrapper[27835]: I0318 13:42:11.051124 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 
13:42:11.051273 master-0 kubenswrapper[27835]: I0318 13:42:11.051257 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e56cc8-abf0-403b-9a1f-3f073ae89422-config\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.051370 master-0 kubenswrapper[27835]: I0318 13:42:11.051358 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xxx\" (UniqueName: \"kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.051560 master-0 kubenswrapper[27835]: I0318 13:42:11.051545 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clk2\" (UniqueName: \"kubernetes.io/projected/c5e56cc8-abf0-403b-9a1f-3f073ae89422-kube-api-access-6clk2\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.051686 master-0 kubenswrapper[27835]: I0318 13:42:11.051674 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.051785 master-0 kubenswrapper[27835]: I0318 13:42:11.051772 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-combined-ca-bundle\") pod 
\"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.051883 master-0 kubenswrapper[27835]: I0318 13:42:11.051871 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovn-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.051979 master-0 kubenswrapper[27835]: I0318 13:42:11.051967 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.052049 master-0 kubenswrapper[27835]: I0318 13:42:11.052037 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovs-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.097834 master-0 kubenswrapper[27835]: I0318 13:42:11.097767 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 18 13:42:11.112205 master-0 kubenswrapper[27835]: I0318 13:42:11.099824 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 18 13:42:11.112205 master-0 kubenswrapper[27835]: I0318 13:42:11.106481 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 18 13:42:11.114304 master-0 kubenswrapper[27835]: I0318 13:42:11.113086 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 18 13:42:11.114304 master-0 kubenswrapper[27835]: I0318 13:42:11.113143 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 18 13:42:11.114304 master-0 kubenswrapper[27835]: I0318 13:42:11.113101 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171247 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171332 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171373 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e56cc8-abf0-403b-9a1f-3f073ae89422-config\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 
13:42:11.171434 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4xxx\" (UniqueName: \"kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171519 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clk2\" (UniqueName: \"kubernetes.io/projected/c5e56cc8-abf0-403b-9a1f-3f073ae89422-kube-api-access-6clk2\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171604 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171651 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-combined-ca-bundle\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171700 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovn-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 
kubenswrapper[27835]: I0318 13:42:11.171745 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171770 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovs-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.171947 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovs-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.172657 master-0 kubenswrapper[27835]: I0318 13:42:11.172364 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.173048 master-0 kubenswrapper[27835]: I0318 13:42:11.172779 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.173048 master-0 kubenswrapper[27835]: 
I0318 13:42:11.173028 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.182700 master-0 kubenswrapper[27835]: I0318 13:42:11.173795 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5e56cc8-abf0-403b-9a1f-3f073ae89422-config\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.182700 master-0 kubenswrapper[27835]: I0318 13:42:11.174209 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c5e56cc8-abf0-403b-9a1f-3f073ae89422-ovn-rundir\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.194867 master-0 kubenswrapper[27835]: I0318 13:42:11.189135 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.199706 master-0 kubenswrapper[27835]: I0318 13:42:11.199520 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e56cc8-abf0-403b-9a1f-3f073ae89422-combined-ca-bundle\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.203000 master-0 kubenswrapper[27835]: I0318 13:42:11.202548 27835 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-859fd45fb7-tsnzf"] Mar 18 13:42:11.203000 master-0 kubenswrapper[27835]: I0318 13:42:11.202794 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clk2\" (UniqueName: \"kubernetes.io/projected/c5e56cc8-abf0-403b-9a1f-3f073ae89422-kube-api-access-6clk2\") pod \"ovn-controller-metrics-zkgwp\" (UID: \"c5e56cc8-abf0-403b-9a1f-3f073ae89422\") " pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.207379 master-0 kubenswrapper[27835]: E0318 13:42:11.207135 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-r4xxx], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" podUID="fa43fd71-68ad-4d12-8827-eab782305ccd" Mar 18 13:42:11.214975 master-0 kubenswrapper[27835]: I0318 13:42:11.213951 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4xxx\" (UniqueName: \"kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx\") pod \"dnsmasq-dns-859fd45fb7-tsnzf\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.234316 master-0 kubenswrapper[27835]: I0318 13:42:11.229132 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"] Mar 18 13:42:11.246968 master-0 kubenswrapper[27835]: I0318 13:42:11.246897 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"] Mar 18 13:42:11.247194 master-0 kubenswrapper[27835]: I0318 13:42:11.247095 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.251920 master-0 kubenswrapper[27835]: I0318 13:42:11.251051 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.276762 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.276894 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.276954 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.277019 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkc9m\" (UniqueName: \"kubernetes.io/projected/6c17ba4f-cece-4e06-b786-27992d500ae7-kube-api-access-pkc9m\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.277089 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-scripts\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.277116 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.281502 master-0 kubenswrapper[27835]: I0318 13:42:11.277178 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-config\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.322258 master-0 kubenswrapper[27835]: I0318 13:42:11.322180 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-zkgwp" Mar 18 13:42:11.379066 master-0 kubenswrapper[27835]: I0318 13:42:11.378921 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-config\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379066 master-0 kubenswrapper[27835]: I0318 13:42:11.379004 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379066 master-0 kubenswrapper[27835]: I0318 13:42:11.379042 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379088 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379111 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 
13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379145 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379188 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkc9m\" (UniqueName: \"kubernetes.io/projected/6c17ba4f-cece-4e06-b786-27992d500ae7-kube-api-access-pkc9m\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379211 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8hn\" (UniqueName: \"kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379237 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379258 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: 
\"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379293 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-scripts\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.379374 master-0 kubenswrapper[27835]: I0318 13:42:11.379323 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.382803 master-0 kubenswrapper[27835]: I0318 13:42:11.380925 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.382803 master-0 kubenswrapper[27835]: I0318 13:42:11.381158 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-scripts\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.384263 master-0 kubenswrapper[27835]: I0318 13:42:11.382841 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c17ba4f-cece-4e06-b786-27992d500ae7-config\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.385476 master-0 kubenswrapper[27835]: I0318 13:42:11.385380 27835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.390242 master-0 kubenswrapper[27835]: I0318 13:42:11.390151 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.390821 master-0 kubenswrapper[27835]: I0318 13:42:11.390755 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c17ba4f-cece-4e06-b786-27992d500ae7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.400725 master-0 kubenswrapper[27835]: I0318 13:42:11.400650 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkc9m\" (UniqueName: \"kubernetes.io/projected/6c17ba4f-cece-4e06-b786-27992d500ae7-kube-api-access-pkc9m\") pod \"ovn-northd-0\" (UID: \"6c17ba4f-cece-4e06-b786-27992d500ae7\") " pod="openstack/ovn-northd-0" Mar 18 13:42:11.420562 master-0 kubenswrapper[27835]: I0318 13:42:11.417255 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 18 13:42:11.434633 master-0 kubenswrapper[27835]: I0318 13:42:11.434504 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 18 13:42:11.447155 master-0 kubenswrapper[27835]: I0318 13:42:11.446596 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 18 13:42:11.484481 master-0 kubenswrapper[27835]: I0318 13:42:11.484360 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.484481 master-0 kubenswrapper[27835]: I0318 13:42:11.484456 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.484694 master-0 kubenswrapper[27835]: I0318 13:42:11.484607 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.484694 master-0 kubenswrapper[27835]: I0318 13:42:11.484649 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.484771 master-0 kubenswrapper[27835]: I0318 13:42:11.484696 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt8hn\" (UniqueName: \"kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: 
\"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.486601 master-0 kubenswrapper[27835]: I0318 13:42:11.485950 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.486601 master-0 kubenswrapper[27835]: I0318 13:42:11.486365 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.486601 master-0 kubenswrapper[27835]: I0318 13:42:11.486600 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.486987 master-0 kubenswrapper[27835]: I0318 13:42:11.486951 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.515216 master-0 kubenswrapper[27835]: I0318 13:42:11.515145 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt8hn\" (UniqueName: \"kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn\") pod \"dnsmasq-dns-5b8649b7f9-8486f\" (UID: 
\"b8cceedf-f909-428d-953e-194d94f1c300\") " pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.619461 master-0 kubenswrapper[27835]: I0318 13:42:11.617866 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:11.744366 master-0 kubenswrapper[27835]: I0318 13:42:11.743325 27835 generic.go:334] "Generic (PLEG): container finished" podID="b6279172-0461-4f84-98e1-e457cba12d40" containerID="c3f748bb03bb2c37a8e6c7cfcd5ff4db6c274111894fa27ae9f5a4f38565cdd9" exitCode=0 Mar 18 13:42:11.744366 master-0 kubenswrapper[27835]: I0318 13:42:11.743456 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.744366 master-0 kubenswrapper[27835]: I0318 13:42:11.743513 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" event={"ID":"b6279172-0461-4f84-98e1-e457cba12d40","Type":"ContainerDied","Data":"c3f748bb03bb2c37a8e6c7cfcd5ff4db6c274111894fa27ae9f5a4f38565cdd9"} Mar 18 13:42:11.744366 master-0 kubenswrapper[27835]: I0318 13:42:11.743593 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" event={"ID":"b6279172-0461-4f84-98e1-e457cba12d40","Type":"ContainerStarted","Data":"d82ff56d86ea04ca10cb0b51f96e23fc9ab246509f085454d1274d3fa2982fd4"} Mar 18 13:42:11.859061 master-0 kubenswrapper[27835]: I0318 13:42:11.859012 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf" Mar 18 13:42:11.927458 master-0 kubenswrapper[27835]: I0318 13:42:11.926090 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-zkgwp"] Mar 18 13:42:12.012814 master-0 kubenswrapper[27835]: I0318 13:42:12.012633 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb\") pod \"fa43fd71-68ad-4d12-8827-eab782305ccd\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " Mar 18 13:42:12.012814 master-0 kubenswrapper[27835]: I0318 13:42:12.012743 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config\") pod \"fa43fd71-68ad-4d12-8827-eab782305ccd\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " Mar 18 13:42:12.012814 master-0 kubenswrapper[27835]: I0318 13:42:12.012809 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc\") pod \"fa43fd71-68ad-4d12-8827-eab782305ccd\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " Mar 18 13:42:12.013123 master-0 kubenswrapper[27835]: I0318 13:42:12.012853 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4xxx\" (UniqueName: \"kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx\") pod \"fa43fd71-68ad-4d12-8827-eab782305ccd\" (UID: \"fa43fd71-68ad-4d12-8827-eab782305ccd\") " Mar 18 13:42:12.015486 master-0 kubenswrapper[27835]: I0318 13:42:12.014905 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config" (OuterVolumeSpecName: "config") pod 
"fa43fd71-68ad-4d12-8827-eab782305ccd" (UID: "fa43fd71-68ad-4d12-8827-eab782305ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:12.015486 master-0 kubenswrapper[27835]: I0318 13:42:12.015169 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa43fd71-68ad-4d12-8827-eab782305ccd" (UID: "fa43fd71-68ad-4d12-8827-eab782305ccd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:12.015486 master-0 kubenswrapper[27835]: I0318 13:42:12.015221 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa43fd71-68ad-4d12-8827-eab782305ccd" (UID: "fa43fd71-68ad-4d12-8827-eab782305ccd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:12.024132 master-0 kubenswrapper[27835]: I0318 13:42:12.021742 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx" (OuterVolumeSpecName: "kube-api-access-r4xxx") pod "fa43fd71-68ad-4d12-8827-eab782305ccd" (UID: "fa43fd71-68ad-4d12-8827-eab782305ccd"). InnerVolumeSpecName "kube-api-access-r4xxx". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:42:12.027934 master-0 kubenswrapper[27835]: I0318 13:42:12.027870 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 18 13:42:12.086306 master-0 kubenswrapper[27835]: I0318 13:42:12.085561 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 18 13:42:12.117643 master-0 kubenswrapper[27835]: I0318 13:42:12.117582 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.117643 master-0 kubenswrapper[27835]: I0318 13:42:12.117634 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.117643 master-0 kubenswrapper[27835]: I0318 13:42:12.117646 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa43fd71-68ad-4d12-8827-eab782305ccd-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.117643 master-0 kubenswrapper[27835]: I0318 13:42:12.117656 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4xxx\" (UniqueName: \"kubernetes.io/projected/fa43fd71-68ad-4d12-8827-eab782305ccd-kube-api-access-r4xxx\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.444390 master-0 kubenswrapper[27835]: I0318 13:42:12.444336 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 18 13:42:12.447112 master-0 kubenswrapper[27835]: I0318 13:42:12.447050 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 18 13:42:12.508229 master-0 kubenswrapper[27835]: I0318 13:42:12.505682 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"]
Mar 18 13:42:12.512310 master-0 kubenswrapper[27835]: W0318 13:42:12.512230 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cceedf_f909_428d_953e_194d94f1c300.slice/crio-970aa6790ee9974dbdc046980d19aad935316b13f23c4eaccc7085825c9d870c WatchSource:0}: Error finding container 970aa6790ee9974dbdc046980d19aad935316b13f23c4eaccc7085825c9d870c: Status 404 returned error can't find the container with id 970aa6790ee9974dbdc046980d19aad935316b13f23c4eaccc7085825c9d870c
Mar 18 13:42:12.528017 master-0 kubenswrapper[27835]: I0318 13:42:12.527137 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw"
Mar 18 13:42:12.625968 master-0 kubenswrapper[27835]: I0318 13:42:12.625912 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgkw4\" (UniqueName: \"kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4\") pod \"b6279172-0461-4f84-98e1-e457cba12d40\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") "
Mar 18 13:42:12.626145 master-0 kubenswrapper[27835]: I0318 13:42:12.626007 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config\") pod \"b6279172-0461-4f84-98e1-e457cba12d40\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") "
Mar 18 13:42:12.626145 master-0 kubenswrapper[27835]: I0318 13:42:12.626097 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc\") pod \"b6279172-0461-4f84-98e1-e457cba12d40\" (UID: \"b6279172-0461-4f84-98e1-e457cba12d40\") "
Mar 18 13:42:12.632327 master-0 kubenswrapper[27835]: I0318 13:42:12.632257 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4" (OuterVolumeSpecName: "kube-api-access-jgkw4") pod "b6279172-0461-4f84-98e1-e457cba12d40" (UID: "b6279172-0461-4f84-98e1-e457cba12d40"). InnerVolumeSpecName "kube-api-access-jgkw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:42:12.659757 master-0 kubenswrapper[27835]: I0318 13:42:12.659710 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config" (OuterVolumeSpecName: "config") pod "b6279172-0461-4f84-98e1-e457cba12d40" (UID: "b6279172-0461-4f84-98e1-e457cba12d40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:42:12.665155 master-0 kubenswrapper[27835]: I0318 13:42:12.664943 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6279172-0461-4f84-98e1-e457cba12d40" (UID: "b6279172-0461-4f84-98e1-e457cba12d40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:42:12.730144 master-0 kubenswrapper[27835]: I0318 13:42:12.730065 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgkw4\" (UniqueName: \"kubernetes.io/projected/b6279172-0461-4f84-98e1-e457cba12d40-kube-api-access-jgkw4\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.730144 master-0 kubenswrapper[27835]: I0318 13:42:12.730118 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.730144 master-0 kubenswrapper[27835]: I0318 13:42:12.730135 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6279172-0461-4f84-98e1-e457cba12d40-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:12.771277 master-0 kubenswrapper[27835]: I0318 13:42:12.771221 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerStarted","Data":"67b6aa4cfe0670be7acf6c6d9ace26123bafc594adb6ff872efcf1afce49395c"}
Mar 18 13:42:12.771277 master-0 kubenswrapper[27835]: I0318 13:42:12.771282 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerStarted","Data":"970aa6790ee9974dbdc046980d19aad935316b13f23c4eaccc7085825c9d870c"}
Mar 18 13:42:12.789439 master-0 kubenswrapper[27835]: I0318 13:42:12.786192 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c17ba4f-cece-4e06-b786-27992d500ae7","Type":"ContainerStarted","Data":"2ad3fd5a639681141a90e01648b5495dfe23b22036677e55121a45675f6e5127"}
Mar 18 13:42:12.792993 master-0 kubenswrapper[27835]: I0318 13:42:12.792078 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw" event={"ID":"b6279172-0461-4f84-98e1-e457cba12d40","Type":"ContainerDied","Data":"d82ff56d86ea04ca10cb0b51f96e23fc9ab246509f085454d1274d3fa2982fd4"}
Mar 18 13:42:12.792993 master-0 kubenswrapper[27835]: I0318 13:42:12.792140 27835 scope.go:117] "RemoveContainer" containerID="c3f748bb03bb2c37a8e6c7cfcd5ff4db6c274111894fa27ae9f5a4f38565cdd9"
Mar 18 13:42:12.792993 master-0 kubenswrapper[27835]: I0318 13:42:12.792269 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bb8ffc699-tsdpw"
Mar 18 13:42:12.810974 master-0 kubenswrapper[27835]: I0318 13:42:12.810468 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zkgwp" event={"ID":"c5e56cc8-abf0-403b-9a1f-3f073ae89422","Type":"ContainerStarted","Data":"a387da936b3b35c94917c83faa33a38fef378c1ea23acd362f50b6ca793974a6"}
Mar 18 13:42:12.810974 master-0 kubenswrapper[27835]: I0318 13:42:12.810516 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-zkgwp" event={"ID":"c5e56cc8-abf0-403b-9a1f-3f073ae89422","Type":"ContainerStarted","Data":"31e8f0e1b04a13d966acdb9893b3291493105a7884a706b368bf368d01b2b5bf"}
Mar 18 13:42:12.810974 master-0 kubenswrapper[27835]: I0318 13:42:12.810546 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-859fd45fb7-tsnzf"
Mar 18 13:42:12.865447 master-0 kubenswrapper[27835]: I0318 13:42:12.864805 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-zkgwp" podStartSLOduration=2.864152207 podStartE2EDuration="2.864152207s" podCreationTimestamp="2026-03-18 13:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:12.847544757 +0000 UTC m=+1096.812756317" watchObservedRunningTime="2026-03-18 13:42:12.864152207 +0000 UTC m=+1096.829363757"
Mar 18 13:42:12.958285 master-0 kubenswrapper[27835]: I0318 13:42:12.950829 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"]
Mar 18 13:42:12.982934 master-0 kubenswrapper[27835]: I0318 13:42:12.979280 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bb8ffc699-tsdpw"]
Mar 18 13:42:12.982934 master-0 kubenswrapper[27835]: I0318 13:42:12.980573 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 18 13:42:13.026935 master-0 kubenswrapper[27835]: I0318 13:42:13.026882 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-859fd45fb7-tsnzf"]
Mar 18 13:42:13.040376 master-0 kubenswrapper[27835]: I0318 13:42:13.036077 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-859fd45fb7-tsnzf"]
Mar 18 13:42:13.688324 master-0 kubenswrapper[27835]: I0318 13:42:13.688256 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Mar 18 13:42:13.688791 master-0 kubenswrapper[27835]: E0318 13:42:13.688748 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6279172-0461-4f84-98e1-e457cba12d40" containerName="init"
Mar 18 13:42:13.688791 master-0 kubenswrapper[27835]: I0318 13:42:13.688788 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6279172-0461-4f84-98e1-e457cba12d40" containerName="init"
Mar 18 13:42:13.689065 master-0 kubenswrapper[27835]: I0318 13:42:13.689038 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6279172-0461-4f84-98e1-e457cba12d40" containerName="init"
Mar 18 13:42:13.731885 master-0 kubenswrapper[27835]: I0318 13:42:13.731823 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 18 13:42:13.732798 master-0 kubenswrapper[27835]: I0318 13:42:13.732699 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 18 13:42:13.734760 master-0 kubenswrapper[27835]: I0318 13:42:13.734710 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Mar 18 13:42:13.734948 master-0 kubenswrapper[27835]: I0318 13:42:13.734924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Mar 18 13:42:13.735101 master-0 kubenswrapper[27835]: I0318 13:42:13.735074 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Mar 18 13:42:13.760981 master-0 kubenswrapper[27835]: I0318 13:42:13.760683 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzjk\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-kube-api-access-jrzjk\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.761166 master-0 kubenswrapper[27835]: I0318 13:42:13.761005 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9b160b49-2493-41f5-8134-481b4ae560db\" (UniqueName: \"kubernetes.io/csi/topolvm.io^738569df-d37f-4155-b244-3fb134df4116\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.761226 master-0 kubenswrapper[27835]: I0318 13:42:13.761160 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a767523-b86f-496d-940f-7a8afb0c3535-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.761470 master-0 kubenswrapper[27835]: I0318 13:42:13.761314 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-lock\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.761678 master-0 kubenswrapper[27835]: I0318 13:42:13.761612 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.761804 master-0 kubenswrapper[27835]: I0318 13:42:13.761782 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-cache\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.824155 master-0 kubenswrapper[27835]: I0318 13:42:13.824097 27835 generic.go:334] "Generic (PLEG): container finished" podID="b8cceedf-f909-428d-953e-194d94f1c300" containerID="67b6aa4cfe0670be7acf6c6d9ace26123bafc594adb6ff872efcf1afce49395c" exitCode=0
Mar 18 13:42:13.824396 master-0 kubenswrapper[27835]: I0318 13:42:13.824269 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerDied","Data":"67b6aa4cfe0670be7acf6c6d9ace26123bafc594adb6ff872efcf1afce49395c"}
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864491 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-cache\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864657 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrzjk\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-kube-api-access-jrzjk\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864723 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9b160b49-2493-41f5-8134-481b4ae560db\" (UniqueName: \"kubernetes.io/csi/topolvm.io^738569df-d37f-4155-b244-3fb134df4116\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864785 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a767523-b86f-496d-940f-7a8afb0c3535-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864882 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-lock\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.865750 master-0 kubenswrapper[27835]: I0318 13:42:13.864969 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.866438 master-0 kubenswrapper[27835]: I0318 13:42:13.865868 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-lock\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: I0318 13:42:13.868218 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7a767523-b86f-496d-940f-7a8afb0c3535-cache\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: E0318 13:42:13.868554 27835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: I0318 13:42:13.868589 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: E0318 13:42:13.868583 27835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: I0318 13:42:13.868619 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9b160b49-2493-41f5-8134-481b4ae560db\" (UniqueName: \"kubernetes.io/csi/topolvm.io^738569df-d37f-4155-b244-3fb134df4116\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/bc6c65111a11c1aba17584814e96e49a4e9f1626244d93a312405012e3a58b96/globalmount\"" pod="openstack/swift-storage-0"
Mar 18 13:42:13.870436 master-0 kubenswrapper[27835]: E0318 13:42:13.868652 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift podName:7a767523-b86f-496d-940f-7a8afb0c3535 nodeName:}" failed. No retries permitted until 2026-03-18 13:42:14.36863427 +0000 UTC m=+1098.333845830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift") pod "swift-storage-0" (UID: "7a767523-b86f-496d-940f-7a8afb0c3535") : configmap "swift-ring-files" not found
Mar 18 13:42:13.876494 master-0 kubenswrapper[27835]: I0318 13:42:13.871885 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a767523-b86f-496d-940f-7a8afb0c3535-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:13.888674 master-0 kubenswrapper[27835]: I0318 13:42:13.888615 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrzjk\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-kube-api-access-jrzjk\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:14.045725 master-0 kubenswrapper[27835]: I0318 13:42:14.044752 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 18 13:42:14.155454 master-0 kubenswrapper[27835]: I0318 13:42:14.152309 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 18 13:42:14.296095 master-0 kubenswrapper[27835]: I0318 13:42:14.296029 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6279172-0461-4f84-98e1-e457cba12d40" path="/var/lib/kubelet/pods/b6279172-0461-4f84-98e1-e457cba12d40/volumes"
Mar 18 13:42:14.297189 master-0 kubenswrapper[27835]: I0318 13:42:14.297157 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa43fd71-68ad-4d12-8827-eab782305ccd" path="/var/lib/kubelet/pods/fa43fd71-68ad-4d12-8827-eab782305ccd/volumes"
Mar 18 13:42:14.377340 master-0 kubenswrapper[27835]: I0318 13:42:14.377281 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:14.377779 master-0 kubenswrapper[27835]: E0318 13:42:14.377724 27835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 18 13:42:14.377779 master-0 kubenswrapper[27835]: E0318 13:42:14.377777 27835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 18 13:42:14.377872 master-0 kubenswrapper[27835]: E0318 13:42:14.377850 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift podName:7a767523-b86f-496d-940f-7a8afb0c3535 nodeName:}" failed. No retries permitted until 2026-03-18 13:42:15.377831019 +0000 UTC m=+1099.343042579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift") pod "swift-storage-0" (UID: "7a767523-b86f-496d-940f-7a8afb0c3535") : configmap "swift-ring-files" not found
Mar 18 13:42:14.624160 master-0 kubenswrapper[27835]: I0318 13:42:14.624045 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gd8zd"]
Mar 18 13:42:14.625976 master-0 kubenswrapper[27835]: I0318 13:42:14.625933 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.636822 master-0 kubenswrapper[27835]: I0318 13:42:14.636770 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Mar 18 13:42:14.637011 master-0 kubenswrapper[27835]: I0318 13:42:14.636996 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 18 13:42:14.637787 master-0 kubenswrapper[27835]: I0318 13:42:14.637737 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Mar 18 13:42:14.701563 master-0 kubenswrapper[27835]: I0318 13:42:14.701493 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gd8zd"]
Mar 18 13:42:14.724564 master-0 kubenswrapper[27835]: I0318 13:42:14.724510 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qqg25"]
Mar 18 13:42:14.727598 master-0 kubenswrapper[27835]: I0318 13:42:14.726031 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:14.728225 master-0 kubenswrapper[27835]: I0318 13:42:14.728032 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Mar 18 13:42:14.738052 master-0 kubenswrapper[27835]: I0318 13:42:14.737998 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qqg25"]
Mar 18 13:42:14.784990 master-0 kubenswrapper[27835]: I0318 13:42:14.784920 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785267 master-0 kubenswrapper[27835]: I0318 13:42:14.785055 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5kx\" (UniqueName: \"kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785267 master-0 kubenswrapper[27835]: I0318 13:42:14.785085 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785267 master-0 kubenswrapper[27835]: I0318 13:42:14.785100 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785267 master-0 kubenswrapper[27835]: I0318 13:42:14.785208 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785267 master-0 kubenswrapper[27835]: I0318 13:42:14.785236 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.785506 master-0 kubenswrapper[27835]: I0318 13:42:14.785280 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.841339 master-0 kubenswrapper[27835]: I0318 13:42:14.841274 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c17ba4f-cece-4e06-b786-27992d500ae7","Type":"ContainerStarted","Data":"a085872060cdb143643724ab0654087005044c748ef94270cd6a2d3da8be9246"}
Mar 18 13:42:14.841339 master-0 kubenswrapper[27835]: I0318 13:42:14.841335 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c17ba4f-cece-4e06-b786-27992d500ae7","Type":"ContainerStarted","Data":"7e24bc62c39c2a1c5e16a6d14cf51103c4aed594f558f30db439e7836ad49ea8"}
Mar 18 13:42:14.841720 master-0 kubenswrapper[27835]: I0318 13:42:14.841373 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Mar 18 13:42:14.848020 master-0 kubenswrapper[27835]: I0318 13:42:14.847952 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerStarted","Data":"04d0c178d5e66c62b9599398b2df917288fb6a8ba8647639521307d9f387f834"}
Mar 18 13:42:14.848179 master-0 kubenswrapper[27835]: I0318 13:42:14.848159 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f"
Mar 18 13:42:14.882386 master-0 kubenswrapper[27835]: I0318 13:42:14.881983 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.179040686 podStartE2EDuration="3.881960915s" podCreationTimestamp="2026-03-18 13:42:11 +0000 UTC" firstStartedPulling="2026-03-18 13:42:12.128862153 +0000 UTC m=+1096.094073713" lastFinishedPulling="2026-03-18 13:42:13.831782372 +0000 UTC m=+1097.796993942" observedRunningTime="2026-03-18 13:42:14.87005881 +0000 UTC m=+1098.835270370" watchObservedRunningTime="2026-03-18 13:42:14.881960915 +0000 UTC m=+1098.847172465"
Mar 18 13:42:14.886766 master-0 kubenswrapper[27835]: I0318 13:42:14.886715 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddq2x\" (UniqueName: \"kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:14.886834 master-0 kubenswrapper[27835]: I0318 13:42:14.886808 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz5kx\" (UniqueName: \"kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.886979 master-0 kubenswrapper[27835]: I0318 13:42:14.886892 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.887073 master-0 kubenswrapper[27835]: I0318 13:42:14.887048 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.887408 master-0 kubenswrapper[27835]: I0318 13:42:14.887384 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.887516 master-0 kubenswrapper[27835]: I0318 13:42:14.887495 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.888244 master-0 kubenswrapper[27835]: I0318 13:42:14.887889 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.888244 master-0 kubenswrapper[27835]: I0318 13:42:14.887993 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.888244 master-0 kubenswrapper[27835]: I0318 13:42:14.888071 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:14.888244 master-0 kubenswrapper[27835]: I0318 13:42:14.888097 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.889096 master-0 kubenswrapper[27835]: I0318 13:42:14.889069 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.889652 master-0 kubenswrapper[27835]: I0318 13:42:14.889618 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.890692 master-0 kubenswrapper[27835]: I0318 13:42:14.890654 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.907451 master-0 kubenswrapper[27835]: I0318 13:42:14.892750 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.909486 master-0 kubenswrapper[27835]: I0318 13:42:14.907989 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.909711 master-0 kubenswrapper[27835]: I0318 13:42:14.909662 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz5kx\" (UniqueName: \"kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx\") pod \"swift-ring-rebalance-gd8zd\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.912158 master-0 kubenswrapper[27835]: I0318 13:42:14.912041 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" podStartSLOduration=3.912022243 podStartE2EDuration="3.912022243s" podCreationTimestamp="2026-03-18 13:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:14.900626291 +0000 UTC m=+1098.865837851" watchObservedRunningTime="2026-03-18 13:42:14.912022243 +0000 UTC m=+1098.877233813"
Mar 18 13:42:14.963148 master-0 kubenswrapper[27835]: I0318 13:42:14.963087 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gd8zd"
Mar 18 13:42:14.990117 master-0 kubenswrapper[27835]: I0318 13:42:14.990058 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:14.990477 master-0 kubenswrapper[27835]: I0318 13:42:14.990445 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddq2x\" (UniqueName: \"kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:14.992904 master-0 kubenswrapper[27835]: I0318 13:42:14.992873 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:15.012158 master-0 kubenswrapper[27835]: I0318 13:42:15.012080 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddq2x\" (UniqueName: \"kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x\") pod \"root-account-create-update-qqg25\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:15.045107 master-0 kubenswrapper[27835]: I0318 13:42:15.045030 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqg25"
Mar 18 13:42:15.259997 master-0 kubenswrapper[27835]: I0318 13:42:15.259943 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9b160b49-2493-41f5-8134-481b4ae560db\" (UniqueName: \"kubernetes.io/csi/topolvm.io^738569df-d37f-4155-b244-3fb134df4116\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:15.401638 master-0 kubenswrapper[27835]: I0318 13:42:15.400712 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0"
Mar 18 13:42:15.401638 master-0 kubenswrapper[27835]: E0318 13:42:15.401127 27835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 18 13:42:15.401638 master-0 kubenswrapper[27835]: E0318 13:42:15.401163 27835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 18 13:42:15.401638 master-0 kubenswrapper[27835]: E0318 13:42:15.401224 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift podName:7a767523-b86f-496d-940f-7a8afb0c3535 nodeName:}" failed. No retries permitted until 2026-03-18 13:42:17.401206743 +0000 UTC m=+1101.366418303 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift") pod "swift-storage-0" (UID: "7a767523-b86f-496d-940f-7a8afb0c3535") : configmap "swift-ring-files" not found Mar 18 13:42:15.468207 master-0 kubenswrapper[27835]: W0318 13:42:15.468132 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6a77218_90a4_48a8_beff_2c3b2d66c53e.slice/crio-c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b WatchSource:0}: Error finding container c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b: Status 404 returned error can't find the container with id c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b Mar 18 13:42:15.484477 master-0 kubenswrapper[27835]: I0318 13:42:15.484269 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gd8zd"] Mar 18 13:42:15.617938 master-0 kubenswrapper[27835]: W0318 13:42:15.617875 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd879ffcd_5c0f_4ea8_83f9_0f2e3e0503d9.slice/crio-c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b WatchSource:0}: Error finding container c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b: Status 404 returned error can't find the container with id c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b Mar 18 13:42:15.619952 master-0 kubenswrapper[27835]: I0318 13:42:15.619690 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qqg25"] Mar 18 13:42:15.856553 master-0 kubenswrapper[27835]: I0318 13:42:15.856494 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqg25" 
event={"ID":"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9","Type":"ContainerStarted","Data":"02e9d4627ab91f9885fa3697aa5e5860b1ec4409c6144140cf0e851184e8045e"} Mar 18 13:42:15.856553 master-0 kubenswrapper[27835]: I0318 13:42:15.856555 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqg25" event={"ID":"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9","Type":"ContainerStarted","Data":"c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b"} Mar 18 13:42:15.858845 master-0 kubenswrapper[27835]: I0318 13:42:15.858032 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gd8zd" event={"ID":"e6a77218-90a4-48a8-beff-2c3b2d66c53e","Type":"ContainerStarted","Data":"c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b"} Mar 18 13:42:15.878406 master-0 kubenswrapper[27835]: I0318 13:42:15.878292 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qqg25" podStartSLOduration=1.8782679610000002 podStartE2EDuration="1.878267961s" podCreationTimestamp="2026-03-18 13:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:15.875789055 +0000 UTC m=+1099.841000635" watchObservedRunningTime="2026-03-18 13:42:15.878267961 +0000 UTC m=+1099.843479541" Mar 18 13:42:16.872642 master-0 kubenswrapper[27835]: I0318 13:42:16.872583 27835 generic.go:334] "Generic (PLEG): container finished" podID="d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" containerID="02e9d4627ab91f9885fa3697aa5e5860b1ec4409c6144140cf0e851184e8045e" exitCode=0 Mar 18 13:42:16.872642 master-0 kubenswrapper[27835]: I0318 13:42:16.872643 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqg25" 
event={"ID":"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9","Type":"ContainerDied","Data":"02e9d4627ab91f9885fa3697aa5e5860b1ec4409c6144140cf0e851184e8045e"} Mar 18 13:42:17.460647 master-0 kubenswrapper[27835]: I0318 13:42:17.460592 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0" Mar 18 13:42:17.460965 master-0 kubenswrapper[27835]: E0318 13:42:17.460803 27835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 13:42:17.460965 master-0 kubenswrapper[27835]: E0318 13:42:17.460842 27835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 13:42:17.460965 master-0 kubenswrapper[27835]: E0318 13:42:17.460906 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift podName:7a767523-b86f-496d-940f-7a8afb0c3535 nodeName:}" failed. No retries permitted until 2026-03-18 13:42:21.460886801 +0000 UTC m=+1105.426098381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift") pod "swift-storage-0" (UID: "7a767523-b86f-496d-940f-7a8afb0c3535") : configmap "swift-ring-files" not found Mar 18 13:42:19.357163 master-0 kubenswrapper[27835]: I0318 13:42:19.353504 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-qftmv"] Mar 18 13:42:19.357163 master-0 kubenswrapper[27835]: I0318 13:42:19.355666 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.499085 master-0 kubenswrapper[27835]: I0318 13:42:19.498747 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts\") pod \"glance-db-create-qftmv\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.499085 master-0 kubenswrapper[27835]: I0318 13:42:19.498871 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skxh2\" (UniqueName: \"kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2\") pod \"glance-db-create-qftmv\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.601865 master-0 kubenswrapper[27835]: I0318 13:42:19.601300 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts\") pod \"glance-db-create-qftmv\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.601865 master-0 kubenswrapper[27835]: I0318 13:42:19.601481 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skxh2\" (UniqueName: \"kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2\") pod \"glance-db-create-qftmv\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.604439 master-0 kubenswrapper[27835]: I0318 13:42:19.602357 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts\") pod \"glance-db-create-qftmv\" 
(UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:19.828487 master-0 kubenswrapper[27835]: I0318 13:42:19.825564 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0dea-account-create-update-x7t7q"] Mar 18 13:42:19.828487 master-0 kubenswrapper[27835]: I0318 13:42:19.827633 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:19.837450 master-0 kubenswrapper[27835]: I0318 13:42:19.835042 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 18 13:42:19.840498 master-0 kubenswrapper[27835]: I0318 13:42:19.840160 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qftmv"] Mar 18 13:42:19.915442 master-0 kubenswrapper[27835]: I0318 13:42:19.913649 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wp7\" (UniqueName: \"kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:19.915442 master-0 kubenswrapper[27835]: I0318 13:42:19.913767 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.016029 master-0 kubenswrapper[27835]: I0318 13:42:20.015895 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7wp7\" (UniqueName: 
\"kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.016186 master-0 kubenswrapper[27835]: I0318 13:42:20.016044 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.017130 master-0 kubenswrapper[27835]: I0318 13:42:20.017101 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.017906 master-0 kubenswrapper[27835]: I0318 13:42:20.017877 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skxh2\" (UniqueName: \"kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2\") pod \"glance-db-create-qftmv\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " pod="openstack/glance-db-create-qftmv" Mar 18 13:42:20.022407 master-0 kubenswrapper[27835]: I0318 13:42:20.022365 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0dea-account-create-update-x7t7q"] Mar 18 13:42:20.022530 master-0 kubenswrapper[27835]: I0318 13:42:20.022434 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qftmv" Mar 18 13:42:20.060462 master-0 kubenswrapper[27835]: I0318 13:42:20.060391 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lbbjp"] Mar 18 13:42:20.062908 master-0 kubenswrapper[27835]: I0318 13:42:20.062862 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.127213 master-0 kubenswrapper[27835]: I0318 13:42:20.127160 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqg25" Mar 18 13:42:20.219894 master-0 kubenswrapper[27835]: I0318 13:42:20.219845 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddq2x\" (UniqueName: \"kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x\") pod \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " Mar 18 13:42:20.220083 master-0 kubenswrapper[27835]: I0318 13:42:20.220015 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts\") pod \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\" (UID: \"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9\") " Mar 18 13:42:20.220421 master-0 kubenswrapper[27835]: I0318 13:42:20.220384 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.220473 master-0 kubenswrapper[27835]: I0318 13:42:20.220441 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7q69m\" (UniqueName: \"kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.221372 master-0 kubenswrapper[27835]: I0318 13:42:20.221344 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" (UID: "d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:20.223579 master-0 kubenswrapper[27835]: I0318 13:42:20.223509 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x" (OuterVolumeSpecName: "kube-api-access-ddq2x") pod "d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" (UID: "d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9"). InnerVolumeSpecName "kube-api-access-ddq2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:20.329626 master-0 kubenswrapper[27835]: I0318 13:42:20.329377 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lbbjp"] Mar 18 13:42:20.330608 master-0 kubenswrapper[27835]: I0318 13:42:20.330574 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.330774 master-0 kubenswrapper[27835]: I0318 13:42:20.330744 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q69m\" (UniqueName: \"kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.332018 master-0 kubenswrapper[27835]: I0318 13:42:20.331993 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.334804 master-0 kubenswrapper[27835]: I0318 13:42:20.334758 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddq2x\" (UniqueName: \"kubernetes.io/projected/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-kube-api-access-ddq2x\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:20.334804 master-0 kubenswrapper[27835]: I0318 13:42:20.334800 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9-operator-scripts\") on node 
\"master-0\" DevicePath \"\"" Mar 18 13:42:20.416908 master-0 kubenswrapper[27835]: I0318 13:42:20.416837 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7wp7\" (UniqueName: \"kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7\") pod \"glance-0dea-account-create-update-x7t7q\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.452791 master-0 kubenswrapper[27835]: I0318 13:42:20.452714 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:20.522646 master-0 kubenswrapper[27835]: I0318 13:42:20.522593 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q69m\" (UniqueName: \"kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m\") pod \"keystone-db-create-lbbjp\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.539379 master-0 kubenswrapper[27835]: W0318 13:42:20.538635 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5346156_9529_4575_bd1e_79d1d034ec56.slice/crio-6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931 WatchSource:0}: Error finding container 6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931: Status 404 returned error can't find the container with id 6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931 Mar 18 13:42:20.562083 master-0 kubenswrapper[27835]: I0318 13:42:20.557733 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8ae1-account-create-update-tpb5p"] Mar 18 13:42:20.562083 master-0 kubenswrapper[27835]: E0318 13:42:20.558243 27835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" containerName="mariadb-account-create-update" Mar 18 13:42:20.562083 master-0 kubenswrapper[27835]: I0318 13:42:20.558257 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" containerName="mariadb-account-create-update" Mar 18 13:42:20.563237 master-0 kubenswrapper[27835]: I0318 13:42:20.562770 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" containerName="mariadb-account-create-update" Mar 18 13:42:20.565123 master-0 kubenswrapper[27835]: I0318 13:42:20.565078 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.573281 master-0 kubenswrapper[27835]: I0318 13:42:20.573213 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 18 13:42:20.606398 master-0 kubenswrapper[27835]: I0318 13:42:20.606131 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-qftmv"] Mar 18 13:42:20.627072 master-0 kubenswrapper[27835]: I0318 13:42:20.627006 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8ae1-account-create-update-tpb5p"] Mar 18 13:42:20.656559 master-0 kubenswrapper[27835]: I0318 13:42:20.655715 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrd2k\" (UniqueName: \"kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.656559 master-0 kubenswrapper[27835]: I0318 13:42:20.655912 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.657211 master-0 kubenswrapper[27835]: I0318 13:42:20.657187 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hd42z"] Mar 18 13:42:20.658576 master-0 kubenswrapper[27835]: I0318 13:42:20.658551 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.676253 master-0 kubenswrapper[27835]: I0318 13:42:20.676120 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hd42z"] Mar 18 13:42:20.701611 master-0 kubenswrapper[27835]: I0318 13:42:20.701564 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-571e-account-create-update-qwmpc"] Mar 18 13:42:20.705660 master-0 kubenswrapper[27835]: I0318 13:42:20.705617 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.713132 master-0 kubenswrapper[27835]: I0318 13:42:20.713057 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 18 13:42:20.761383 master-0 kubenswrapper[27835]: I0318 13:42:20.761342 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:20.762744 master-0 kubenswrapper[27835]: I0318 13:42:20.762680 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts\") pod \"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.762855 master-0 kubenswrapper[27835]: I0318 13:42:20.762830 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zkf\" (UniqueName: \"kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf\") pod \"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.762905 master-0 kubenswrapper[27835]: I0318 13:42:20.762888 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.762959 master-0 kubenswrapper[27835]: I0318 13:42:20.762940 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrd2k\" (UniqueName: \"kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.764289 master-0 kubenswrapper[27835]: I0318 13:42:20.764235 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.771519 master-0 kubenswrapper[27835]: I0318 13:42:20.771475 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-571e-account-create-update-qwmpc"] Mar 18 13:42:20.789957 master-0 kubenswrapper[27835]: I0318 13:42:20.789915 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrd2k\" (UniqueName: \"kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k\") pod \"keystone-8ae1-account-create-update-tpb5p\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.864520 master-0 kubenswrapper[27835]: I0318 13:42:20.864280 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.866034 master-0 kubenswrapper[27835]: I0318 13:42:20.865896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts\") pod \"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.867327 master-0 kubenswrapper[27835]: I0318 13:42:20.867293 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts\") pod 
\"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.867399 master-0 kubenswrapper[27835]: I0318 13:42:20.867379 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfzrp\" (UniqueName: \"kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.867509 master-0 kubenswrapper[27835]: I0318 13:42:20.867491 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zkf\" (UniqueName: \"kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf\") pod \"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.887226 master-0 kubenswrapper[27835]: I0318 13:42:20.887187 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zkf\" (UniqueName: \"kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf\") pod \"placement-db-create-hd42z\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " pod="openstack/placement-db-create-hd42z" Mar 18 13:42:20.925739 master-0 kubenswrapper[27835]: I0318 13:42:20.920993 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:20.932096 master-0 kubenswrapper[27835]: I0318 13:42:20.932032 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qqg25" event={"ID":"d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9","Type":"ContainerDied","Data":"c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b"} Mar 18 13:42:20.932182 master-0 kubenswrapper[27835]: I0318 13:42:20.932100 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5147e851b09566f4435071ec3a1db58f885490f5ee64a6034d57c845474931b" Mar 18 13:42:20.932219 master-0 kubenswrapper[27835]: I0318 13:42:20.932185 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qqg25" Mar 18 13:42:20.934624 master-0 kubenswrapper[27835]: I0318 13:42:20.934584 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qftmv" event={"ID":"c5346156-9529-4575-bd1e-79d1d034ec56","Type":"ContainerStarted","Data":"0ca060ed73f56a0c4c5767679adb7de8690c27440a3dbe67abbbbd92b6a4857f"} Mar 18 13:42:20.934692 master-0 kubenswrapper[27835]: I0318 13:42:20.934625 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qftmv" event={"ID":"c5346156-9529-4575-bd1e-79d1d034ec56","Type":"ContainerStarted","Data":"6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931"} Mar 18 13:42:20.969345 master-0 kubenswrapper[27835]: I0318 13:42:20.968905 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfzrp\" (UniqueName: \"kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.969345 master-0 kubenswrapper[27835]: 
I0318 13:42:20.969046 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.970401 master-0 kubenswrapper[27835]: I0318 13:42:20.970374 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.986542 master-0 kubenswrapper[27835]: I0318 13:42:20.975862 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-qftmv" podStartSLOduration=2.975838154 podStartE2EDuration="2.975838154s" podCreationTimestamp="2026-03-18 13:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:20.954495918 +0000 UTC m=+1104.919707498" watchObservedRunningTime="2026-03-18 13:42:20.975838154 +0000 UTC m=+1104.941049714" Mar 18 13:42:20.988811 master-0 kubenswrapper[27835]: I0318 13:42:20.988762 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfzrp\" (UniqueName: \"kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp\") pod \"placement-571e-account-create-update-qwmpc\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:20.993885 master-0 kubenswrapper[27835]: I0318 13:42:20.993826 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hd42z" Mar 18 13:42:21.028249 master-0 kubenswrapper[27835]: I0318 13:42:21.027203 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:21.080878 master-0 kubenswrapper[27835]: I0318 13:42:21.080808 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0dea-account-create-update-x7t7q"] Mar 18 13:42:21.269898 master-0 kubenswrapper[27835]: I0318 13:42:21.269830 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lbbjp"] Mar 18 13:42:21.300378 master-0 kubenswrapper[27835]: W0318 13:42:21.300313 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36e457fd_4fdd_4013_a0ba_e4b04480064b.slice/crio-bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921 WatchSource:0}: Error finding container bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921: Status 404 returned error can't find the container with id bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921 Mar 18 13:42:21.485029 master-0 kubenswrapper[27835]: I0318 13:42:21.484896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0" Mar 18 13:42:21.485849 master-0 kubenswrapper[27835]: E0318 13:42:21.485130 27835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 13:42:21.485849 master-0 kubenswrapper[27835]: E0318 13:42:21.485309 27835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 
13:42:21.485849 master-0 kubenswrapper[27835]: E0318 13:42:21.485371 27835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift podName:7a767523-b86f-496d-940f-7a8afb0c3535 nodeName:}" failed. No retries permitted until 2026-03-18 13:42:29.485352143 +0000 UTC m=+1113.450563703 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift") pod "swift-storage-0" (UID: "7a767523-b86f-496d-940f-7a8afb0c3535") : configmap "swift-ring-files" not found Mar 18 13:42:21.491340 master-0 kubenswrapper[27835]: I0318 13:42:21.491225 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8ae1-account-create-update-tpb5p"] Mar 18 13:42:21.603717 master-0 kubenswrapper[27835]: I0318 13:42:21.603604 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hd42z"] Mar 18 13:42:21.624165 master-0 kubenswrapper[27835]: I0318 13:42:21.619377 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:21.761621 master-0 kubenswrapper[27835]: I0318 13:42:21.760323 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:42:21.761621 master-0 kubenswrapper[27835]: I0318 13:42:21.760558 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="dnsmasq-dns" containerID="cri-o://6728595522ef2ce6242ce29be25c45f543787f2c638e59b4d987ce1c24d424fb" gracePeriod=10 Mar 18 13:42:21.775558 master-0 kubenswrapper[27835]: W0318 13:42:21.774355 27835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6962730c_c54f_4806_8fd4_165f6c7b5728.slice/crio-588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481 WatchSource:0}: Error finding container 588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481: Status 404 returned error can't find the container with id 588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481 Mar 18 13:42:21.793071 master-0 kubenswrapper[27835]: I0318 13:42:21.790442 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-571e-account-create-update-qwmpc"] Mar 18 13:42:21.952523 master-0 kubenswrapper[27835]: I0318 13:42:21.952457 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hd42z" event={"ID":"19318ed4-494a-44fb-b05f-1b82d07994be","Type":"ContainerStarted","Data":"afde83f6c247461401455b2fb26f99bf4f4f18681371641aa0200af48038587c"} Mar 18 13:42:21.954170 master-0 kubenswrapper[27835]: I0318 13:42:21.954013 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ae1-account-create-update-tpb5p" event={"ID":"84a4c423-d112-4b2d-9917-2fb8af188187","Type":"ContainerStarted","Data":"fdf301e2f58eed5f9744f31fe5eb9ade99acee7450c91153a77428178e7d0cdb"} Mar 18 13:42:21.954170 master-0 kubenswrapper[27835]: I0318 13:42:21.954041 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ae1-account-create-update-tpb5p" event={"ID":"84a4c423-d112-4b2d-9917-2fb8af188187","Type":"ContainerStarted","Data":"2a2085748523ba8f05c1542f472f70d4b96d0195e050028125ab0950eb6e90b6"} Mar 18 13:42:21.958992 master-0 kubenswrapper[27835]: I0318 13:42:21.956495 27835 generic.go:334] "Generic (PLEG): container finished" podID="c5346156-9529-4575-bd1e-79d1d034ec56" containerID="0ca060ed73f56a0c4c5767679adb7de8690c27440a3dbe67abbbbd92b6a4857f" exitCode=0 Mar 18 13:42:21.958992 master-0 kubenswrapper[27835]: I0318 13:42:21.956558 27835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-db-create-qftmv" event={"ID":"c5346156-9529-4575-bd1e-79d1d034ec56","Type":"ContainerDied","Data":"0ca060ed73f56a0c4c5767679adb7de8690c27440a3dbe67abbbbd92b6a4857f"} Mar 18 13:42:21.959573 master-0 kubenswrapper[27835]: I0318 13:42:21.959403 27835 generic.go:334] "Generic (PLEG): container finished" podID="36e457fd-4fdd-4013-a0ba-e4b04480064b" containerID="0060352331cb881cd80881fedd8313e5f4b79c7b6249f79e399dad1942c109e5" exitCode=0 Mar 18 13:42:21.959573 master-0 kubenswrapper[27835]: I0318 13:42:21.959490 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lbbjp" event={"ID":"36e457fd-4fdd-4013-a0ba-e4b04480064b","Type":"ContainerDied","Data":"0060352331cb881cd80881fedd8313e5f4b79c7b6249f79e399dad1942c109e5"} Mar 18 13:42:21.959697 master-0 kubenswrapper[27835]: I0318 13:42:21.959553 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lbbjp" event={"ID":"36e457fd-4fdd-4013-a0ba-e4b04480064b","Type":"ContainerStarted","Data":"bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921"} Mar 18 13:42:21.963362 master-0 kubenswrapper[27835]: I0318 13:42:21.961146 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gd8zd" event={"ID":"e6a77218-90a4-48a8-beff-2c3b2d66c53e","Type":"ContainerStarted","Data":"eaf977d35d2aad20b6164a59524eb871b6aa92c60eebbe8a94b338946d69b2e7"} Mar 18 13:42:21.964536 master-0 kubenswrapper[27835]: I0318 13:42:21.964498 27835 generic.go:334] "Generic (PLEG): container finished" podID="6a036cb1-24e8-401c-af08-1291061013fa" containerID="6728595522ef2ce6242ce29be25c45f543787f2c638e59b4d987ce1c24d424fb" exitCode=0 Mar 18 13:42:21.964607 master-0 kubenswrapper[27835]: I0318 13:42:21.964550 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" 
event={"ID":"6a036cb1-24e8-401c-af08-1291061013fa","Type":"ContainerDied","Data":"6728595522ef2ce6242ce29be25c45f543787f2c638e59b4d987ce1c24d424fb"} Mar 18 13:42:21.967215 master-0 kubenswrapper[27835]: I0318 13:42:21.967089 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-571e-account-create-update-qwmpc" event={"ID":"6962730c-c54f-4806-8fd4-165f6c7b5728","Type":"ContainerStarted","Data":"588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481"} Mar 18 13:42:21.969008 master-0 kubenswrapper[27835]: I0318 13:42:21.968971 27835 generic.go:334] "Generic (PLEG): container finished" podID="40629581-9efa-429e-adb9-d34bd5a7503d" containerID="475c06844783a943bb72476eefc715bfc7fb1716de2c45f346dcfcd02cb52c45" exitCode=0 Mar 18 13:42:21.969008 master-0 kubenswrapper[27835]: I0318 13:42:21.969004 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0dea-account-create-update-x7t7q" event={"ID":"40629581-9efa-429e-adb9-d34bd5a7503d","Type":"ContainerDied","Data":"475c06844783a943bb72476eefc715bfc7fb1716de2c45f346dcfcd02cb52c45"} Mar 18 13:42:21.969342 master-0 kubenswrapper[27835]: I0318 13:42:21.969019 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0dea-account-create-update-x7t7q" event={"ID":"40629581-9efa-429e-adb9-d34bd5a7503d","Type":"ContainerStarted","Data":"0f762a0cc3233d2b260acbbe7c31c0e187913ef0415ac20bb6b4deb1b8a93d72"} Mar 18 13:42:21.995994 master-0 kubenswrapper[27835]: I0318 13:42:21.993317 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8ae1-account-create-update-tpb5p" podStartSLOduration=1.9933003999999999 podStartE2EDuration="1.9933004s" podCreationTimestamp="2026-03-18 13:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:21.980600963 +0000 UTC m=+1105.945812543" watchObservedRunningTime="2026-03-18 
13:42:21.9933004 +0000 UTC m=+1105.958511960" Mar 18 13:42:22.121495 master-0 kubenswrapper[27835]: I0318 13:42:22.118059 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gd8zd" podStartSLOduration=3.041903883 podStartE2EDuration="8.118037727s" podCreationTimestamp="2026-03-18 13:42:14 +0000 UTC" firstStartedPulling="2026-03-18 13:42:15.473199721 +0000 UTC m=+1099.438411281" lastFinishedPulling="2026-03-18 13:42:20.549333565 +0000 UTC m=+1104.514545125" observedRunningTime="2026-03-18 13:42:22.10794095 +0000 UTC m=+1106.073152520" watchObservedRunningTime="2026-03-18 13:42:22.118037727 +0000 UTC m=+1106.083249287" Mar 18 13:42:22.406338 master-0 kubenswrapper[27835]: I0318 13:42:22.405802 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:42:22.543996 master-0 kubenswrapper[27835]: I0318 13:42:22.543932 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28546\" (UniqueName: \"kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546\") pod \"6a036cb1-24e8-401c-af08-1291061013fa\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " Mar 18 13:42:22.544639 master-0 kubenswrapper[27835]: I0318 13:42:22.544107 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config\") pod \"6a036cb1-24e8-401c-af08-1291061013fa\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " Mar 18 13:42:22.544639 master-0 kubenswrapper[27835]: I0318 13:42:22.544194 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc\") pod \"6a036cb1-24e8-401c-af08-1291061013fa\" (UID: \"6a036cb1-24e8-401c-af08-1291061013fa\") " Mar 18 13:42:22.548835 
master-0 kubenswrapper[27835]: I0318 13:42:22.548118 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546" (OuterVolumeSpecName: "kube-api-access-28546") pod "6a036cb1-24e8-401c-af08-1291061013fa" (UID: "6a036cb1-24e8-401c-af08-1291061013fa"). InnerVolumeSpecName "kube-api-access-28546". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:22.593353 master-0 kubenswrapper[27835]: I0318 13:42:22.593234 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config" (OuterVolumeSpecName: "config") pod "6a036cb1-24e8-401c-af08-1291061013fa" (UID: "6a036cb1-24e8-401c-af08-1291061013fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:22.595072 master-0 kubenswrapper[27835]: I0318 13:42:22.595014 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a036cb1-24e8-401c-af08-1291061013fa" (UID: "6a036cb1-24e8-401c-af08-1291061013fa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:22.646517 master-0 kubenswrapper[27835]: I0318 13:42:22.646000 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:22.646517 master-0 kubenswrapper[27835]: I0318 13:42:22.646051 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a036cb1-24e8-401c-af08-1291061013fa-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:22.646517 master-0 kubenswrapper[27835]: I0318 13:42:22.646062 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28546\" (UniqueName: \"kubernetes.io/projected/6a036cb1-24e8-401c-af08-1291061013fa-kube-api-access-28546\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:22.994435 master-0 kubenswrapper[27835]: I0318 13:42:22.994299 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" event={"ID":"6a036cb1-24e8-401c-af08-1291061013fa","Type":"ContainerDied","Data":"ade841e38ea6aa818bd0a8868c2bbeecc8f8c8f6676792920edff5d93802f85f"} Mar 18 13:42:22.994435 master-0 kubenswrapper[27835]: I0318 13:42:22.994404 27835 scope.go:117] "RemoveContainer" containerID="6728595522ef2ce6242ce29be25c45f543787f2c638e59b4d987ce1c24d424fb" Mar 18 13:42:22.994735 master-0 kubenswrapper[27835]: I0318 13:42:22.994553 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z" Mar 18 13:42:22.998036 master-0 kubenswrapper[27835]: I0318 13:42:22.997880 27835 generic.go:334] "Generic (PLEG): container finished" podID="6962730c-c54f-4806-8fd4-165f6c7b5728" containerID="5020984329f2fbbb81ed7299b57841aa7fa1fbe05a4ecd992b8667e4d18db5bb" exitCode=0 Mar 18 13:42:22.998036 master-0 kubenswrapper[27835]: I0318 13:42:22.997985 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-571e-account-create-update-qwmpc" event={"ID":"6962730c-c54f-4806-8fd4-165f6c7b5728","Type":"ContainerDied","Data":"5020984329f2fbbb81ed7299b57841aa7fa1fbe05a4ecd992b8667e4d18db5bb"} Mar 18 13:42:23.001937 master-0 kubenswrapper[27835]: I0318 13:42:23.001867 27835 generic.go:334] "Generic (PLEG): container finished" podID="19318ed4-494a-44fb-b05f-1b82d07994be" containerID="7d097b1e4cec0030d94fcbfd31ac84f92b0aa2fe3296e3908431907f5e844ff6" exitCode=0 Mar 18 13:42:23.002156 master-0 kubenswrapper[27835]: I0318 13:42:23.002091 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hd42z" event={"ID":"19318ed4-494a-44fb-b05f-1b82d07994be","Type":"ContainerDied","Data":"7d097b1e4cec0030d94fcbfd31ac84f92b0aa2fe3296e3908431907f5e844ff6"} Mar 18 13:42:23.005813 master-0 kubenswrapper[27835]: I0318 13:42:23.005764 27835 generic.go:334] "Generic (PLEG): container finished" podID="84a4c423-d112-4b2d-9917-2fb8af188187" containerID="fdf301e2f58eed5f9744f31fe5eb9ade99acee7450c91153a77428178e7d0cdb" exitCode=0 Mar 18 13:42:23.005975 master-0 kubenswrapper[27835]: I0318 13:42:23.005935 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ae1-account-create-update-tpb5p" event={"ID":"84a4c423-d112-4b2d-9917-2fb8af188187","Type":"ContainerDied","Data":"fdf301e2f58eed5f9744f31fe5eb9ade99acee7450c91153a77428178e7d0cdb"} Mar 18 13:42:23.032349 master-0 kubenswrapper[27835]: I0318 13:42:23.032310 27835 scope.go:117] "RemoveContainer" 
containerID="f378b2cd7842dc383b721f1b96d394d299a694cb1c6c4db8ef5f21ade54aabfe" Mar 18 13:42:23.160622 master-0 kubenswrapper[27835]: I0318 13:42:23.160547 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:42:23.170611 master-0 kubenswrapper[27835]: I0318 13:42:23.170489 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-v9p5z"] Mar 18 13:42:23.697042 master-0 kubenswrapper[27835]: I0318 13:42:23.696580 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:23.777548 master-0 kubenswrapper[27835]: I0318 13:42:23.774462 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts\") pod \"36e457fd-4fdd-4013-a0ba-e4b04480064b\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " Mar 18 13:42:23.777548 master-0 kubenswrapper[27835]: I0318 13:42:23.774535 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q69m\" (UniqueName: \"kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m\") pod \"36e457fd-4fdd-4013-a0ba-e4b04480064b\" (UID: \"36e457fd-4fdd-4013-a0ba-e4b04480064b\") " Mar 18 13:42:23.777548 master-0 kubenswrapper[27835]: I0318 13:42:23.775047 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36e457fd-4fdd-4013-a0ba-e4b04480064b" (UID: "36e457fd-4fdd-4013-a0ba-e4b04480064b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:23.792105 master-0 kubenswrapper[27835]: I0318 13:42:23.791983 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m" (OuterVolumeSpecName: "kube-api-access-7q69m") pod "36e457fd-4fdd-4013-a0ba-e4b04480064b" (UID: "36e457fd-4fdd-4013-a0ba-e4b04480064b"). InnerVolumeSpecName "kube-api-access-7q69m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:23.864582 master-0 kubenswrapper[27835]: I0318 13:42:23.864535 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:23.876564 master-0 kubenswrapper[27835]: I0318 13:42:23.876518 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e457fd-4fdd-4013-a0ba-e4b04480064b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:23.876564 master-0 kubenswrapper[27835]: I0318 13:42:23.876554 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q69m\" (UniqueName: \"kubernetes.io/projected/36e457fd-4fdd-4013-a0ba-e4b04480064b-kube-api-access-7q69m\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:23.877460 master-0 kubenswrapper[27835]: I0318 13:42:23.877438 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-qftmv" Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.977805 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skxh2\" (UniqueName: \"kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2\") pod \"c5346156-9529-4575-bd1e-79d1d034ec56\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.977915 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7wp7\" (UniqueName: \"kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7\") pod \"40629581-9efa-429e-adb9-d34bd5a7503d\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.978005 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts\") pod \"40629581-9efa-429e-adb9-d34bd5a7503d\" (UID: \"40629581-9efa-429e-adb9-d34bd5a7503d\") " Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.978130 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts\") pod \"c5346156-9529-4575-bd1e-79d1d034ec56\" (UID: \"c5346156-9529-4575-bd1e-79d1d034ec56\") " Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.978957 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40629581-9efa-429e-adb9-d34bd5a7503d" (UID: "40629581-9efa-429e-adb9-d34bd5a7503d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.979428 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5346156-9529-4575-bd1e-79d1d034ec56" (UID: "c5346156-9529-4575-bd1e-79d1d034ec56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:23.981839 master-0 kubenswrapper[27835]: I0318 13:42:23.981466 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2" (OuterVolumeSpecName: "kube-api-access-skxh2") pod "c5346156-9529-4575-bd1e-79d1d034ec56" (UID: "c5346156-9529-4575-bd1e-79d1d034ec56"). InnerVolumeSpecName "kube-api-access-skxh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:23.982356 master-0 kubenswrapper[27835]: I0318 13:42:23.981895 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7" (OuterVolumeSpecName: "kube-api-access-s7wp7") pod "40629581-9efa-429e-adb9-d34bd5a7503d" (UID: "40629581-9efa-429e-adb9-d34bd5a7503d"). InnerVolumeSpecName "kube-api-access-s7wp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:23.984507 master-0 kubenswrapper[27835]: I0318 13:42:23.984457 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5346156-9529-4575-bd1e-79d1d034ec56-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:23.984507 master-0 kubenswrapper[27835]: I0318 13:42:23.984505 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skxh2\" (UniqueName: \"kubernetes.io/projected/c5346156-9529-4575-bd1e-79d1d034ec56-kube-api-access-skxh2\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:23.984649 master-0 kubenswrapper[27835]: I0318 13:42:23.984524 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7wp7\" (UniqueName: \"kubernetes.io/projected/40629581-9efa-429e-adb9-d34bd5a7503d-kube-api-access-s7wp7\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:23.984649 master-0 kubenswrapper[27835]: I0318 13:42:23.984540 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40629581-9efa-429e-adb9-d34bd5a7503d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.019854 master-0 kubenswrapper[27835]: I0318 13:42:24.019767 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0dea-account-create-update-x7t7q" event={"ID":"40629581-9efa-429e-adb9-d34bd5a7503d","Type":"ContainerDied","Data":"0f762a0cc3233d2b260acbbe7c31c0e187913ef0415ac20bb6b4deb1b8a93d72"} Mar 18 13:42:24.019854 master-0 kubenswrapper[27835]: I0318 13:42:24.019821 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f762a0cc3233d2b260acbbe7c31c0e187913ef0415ac20bb6b4deb1b8a93d72" Mar 18 13:42:24.019854 master-0 kubenswrapper[27835]: I0318 13:42:24.019792 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0dea-account-create-update-x7t7q" Mar 18 13:42:24.026287 master-0 kubenswrapper[27835]: I0318 13:42:24.024952 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-qftmv" event={"ID":"c5346156-9529-4575-bd1e-79d1d034ec56","Type":"ContainerDied","Data":"6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931"} Mar 18 13:42:24.026287 master-0 kubenswrapper[27835]: I0318 13:42:24.024991 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5626616c07f8cada03c562f5000a9d877ab24154392fd1bb79aa4b6dd7a931" Mar 18 13:42:24.026287 master-0 kubenswrapper[27835]: I0318 13:42:24.025050 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-qftmv" Mar 18 13:42:24.035989 master-0 kubenswrapper[27835]: I0318 13:42:24.035872 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lbbjp" Mar 18 13:42:24.036119 master-0 kubenswrapper[27835]: I0318 13:42:24.035872 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lbbjp" event={"ID":"36e457fd-4fdd-4013-a0ba-e4b04480064b","Type":"ContainerDied","Data":"bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921"} Mar 18 13:42:24.036119 master-0 kubenswrapper[27835]: I0318 13:42:24.036025 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bae550727ec9a40e75e3fc77b2ca43572925f2e4f1816d2a35e1a0c3abb83921" Mar 18 13:42:24.296081 master-0 kubenswrapper[27835]: I0318 13:42:24.295988 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a036cb1-24e8-401c-af08-1291061013fa" path="/var/lib/kubelet/pods/6a036cb1-24e8-401c-af08-1291061013fa/volumes" Mar 18 13:42:24.456823 master-0 kubenswrapper[27835]: I0318 13:42:24.456779 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hd42z" Mar 18 13:42:24.613522 master-0 kubenswrapper[27835]: I0318 13:42:24.613185 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zkf\" (UniqueName: \"kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf\") pod \"19318ed4-494a-44fb-b05f-1b82d07994be\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " Mar 18 13:42:24.613522 master-0 kubenswrapper[27835]: I0318 13:42:24.613477 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts\") pod \"19318ed4-494a-44fb-b05f-1b82d07994be\" (UID: \"19318ed4-494a-44fb-b05f-1b82d07994be\") " Mar 18 13:42:24.617288 master-0 kubenswrapper[27835]: I0318 13:42:24.616815 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19318ed4-494a-44fb-b05f-1b82d07994be" (UID: "19318ed4-494a-44fb-b05f-1b82d07994be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:24.621547 master-0 kubenswrapper[27835]: I0318 13:42:24.621426 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf" (OuterVolumeSpecName: "kube-api-access-l6zkf") pod "19318ed4-494a-44fb-b05f-1b82d07994be" (UID: "19318ed4-494a-44fb-b05f-1b82d07994be"). InnerVolumeSpecName "kube-api-access-l6zkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:24.701557 master-0 kubenswrapper[27835]: I0318 13:42:24.699178 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:24.707094 master-0 kubenswrapper[27835]: I0318 13:42:24.707002 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:24.715459 master-0 kubenswrapper[27835]: I0318 13:42:24.715403 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19318ed4-494a-44fb-b05f-1b82d07994be-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.716214 master-0 kubenswrapper[27835]: I0318 13:42:24.716181 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zkf\" (UniqueName: \"kubernetes.io/projected/19318ed4-494a-44fb-b05f-1b82d07994be-kube-api-access-l6zkf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.817943 master-0 kubenswrapper[27835]: I0318 13:42:24.817882 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts\") pod \"84a4c423-d112-4b2d-9917-2fb8af188187\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " Mar 18 13:42:24.817943 master-0 kubenswrapper[27835]: I0318 13:42:24.817937 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfzrp\" (UniqueName: \"kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp\") pod \"6962730c-c54f-4806-8fd4-165f6c7b5728\" (UID: \"6962730c-c54f-4806-8fd4-165f6c7b5728\") " Mar 18 13:42:24.818175 master-0 kubenswrapper[27835]: I0318 13:42:24.818067 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts\") pod \"6962730c-c54f-4806-8fd4-165f6c7b5728\" (UID: 
\"6962730c-c54f-4806-8fd4-165f6c7b5728\") " Mar 18 13:42:24.818260 master-0 kubenswrapper[27835]: I0318 13:42:24.818237 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrd2k\" (UniqueName: \"kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k\") pod \"84a4c423-d112-4b2d-9917-2fb8af188187\" (UID: \"84a4c423-d112-4b2d-9917-2fb8af188187\") " Mar 18 13:42:24.818711 master-0 kubenswrapper[27835]: I0318 13:42:24.818692 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84a4c423-d112-4b2d-9917-2fb8af188187" (UID: "84a4c423-d112-4b2d-9917-2fb8af188187"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:24.818989 master-0 kubenswrapper[27835]: I0318 13:42:24.818893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6962730c-c54f-4806-8fd4-165f6c7b5728" (UID: "6962730c-c54f-4806-8fd4-165f6c7b5728"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:24.819365 master-0 kubenswrapper[27835]: I0318 13:42:24.819328 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a4c423-d112-4b2d-9917-2fb8af188187-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.819365 master-0 kubenswrapper[27835]: I0318 13:42:24.819362 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6962730c-c54f-4806-8fd4-165f6c7b5728-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.821220 master-0 kubenswrapper[27835]: I0318 13:42:24.821180 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp" (OuterVolumeSpecName: "kube-api-access-jfzrp") pod "6962730c-c54f-4806-8fd4-165f6c7b5728" (UID: "6962730c-c54f-4806-8fd4-165f6c7b5728"). InnerVolumeSpecName "kube-api-access-jfzrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:24.822191 master-0 kubenswrapper[27835]: I0318 13:42:24.822140 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k" (OuterVolumeSpecName: "kube-api-access-xrd2k") pod "84a4c423-d112-4b2d-9917-2fb8af188187" (UID: "84a4c423-d112-4b2d-9917-2fb8af188187"). InnerVolumeSpecName "kube-api-access-xrd2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:24.921347 master-0 kubenswrapper[27835]: I0318 13:42:24.921274 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrd2k\" (UniqueName: \"kubernetes.io/projected/84a4c423-d112-4b2d-9917-2fb8af188187-kube-api-access-xrd2k\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:24.921347 master-0 kubenswrapper[27835]: I0318 13:42:24.921334 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfzrp\" (UniqueName: \"kubernetes.io/projected/6962730c-c54f-4806-8fd4-165f6c7b5728-kube-api-access-jfzrp\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:25.048270 master-0 kubenswrapper[27835]: I0318 13:42:25.048206 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-571e-account-create-update-qwmpc" event={"ID":"6962730c-c54f-4806-8fd4-165f6c7b5728","Type":"ContainerDied","Data":"588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481"} Mar 18 13:42:25.048270 master-0 kubenswrapper[27835]: I0318 13:42:25.048266 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588235a9539611eb13a3e567e03b258dcf53a0824824f52a710af272fd09a481" Mar 18 13:42:25.048757 master-0 kubenswrapper[27835]: I0318 13:42:25.048354 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-571e-account-create-update-qwmpc" Mar 18 13:42:25.052328 master-0 kubenswrapper[27835]: I0318 13:42:25.052241 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hd42z" event={"ID":"19318ed4-494a-44fb-b05f-1b82d07994be","Type":"ContainerDied","Data":"afde83f6c247461401455b2fb26f99bf4f4f18681371641aa0200af48038587c"} Mar 18 13:42:25.052328 master-0 kubenswrapper[27835]: I0318 13:42:25.052298 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afde83f6c247461401455b2fb26f99bf4f4f18681371641aa0200af48038587c" Mar 18 13:42:25.052741 master-0 kubenswrapper[27835]: I0318 13:42:25.052374 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hd42z" Mar 18 13:42:25.055112 master-0 kubenswrapper[27835]: I0318 13:42:25.055080 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ae1-account-create-update-tpb5p" event={"ID":"84a4c423-d112-4b2d-9917-2fb8af188187","Type":"ContainerDied","Data":"2a2085748523ba8f05c1542f472f70d4b96d0195e050028125ab0950eb6e90b6"} Mar 18 13:42:25.055112 master-0 kubenswrapper[27835]: I0318 13:42:25.055115 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a2085748523ba8f05c1542f472f70d4b96d0195e050028125ab0950eb6e90b6" Mar 18 13:42:25.055112 master-0 kubenswrapper[27835]: I0318 13:42:25.055129 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8ae1-account-create-update-tpb5p" Mar 18 13:42:25.837922 master-0 kubenswrapper[27835]: I0318 13:42:25.837832 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qqg25"] Mar 18 13:42:25.850204 master-0 kubenswrapper[27835]: I0318 13:42:25.850119 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qqg25"] Mar 18 13:42:26.300962 master-0 kubenswrapper[27835]: I0318 13:42:26.300850 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9" path="/var/lib/kubelet/pods/d879ffcd-5c0f-4ea8-83f9-0f2e3e0503d9/volumes" Mar 18 13:42:28.085209 master-0 kubenswrapper[27835]: I0318 13:42:28.085156 27835 generic.go:334] "Generic (PLEG): container finished" podID="e6a77218-90a4-48a8-beff-2c3b2d66c53e" containerID="eaf977d35d2aad20b6164a59524eb871b6aa92c60eebbe8a94b338946d69b2e7" exitCode=0 Mar 18 13:42:28.085922 master-0 kubenswrapper[27835]: I0318 13:42:28.085249 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gd8zd" event={"ID":"e6a77218-90a4-48a8-beff-2c3b2d66c53e","Type":"ContainerDied","Data":"eaf977d35d2aad20b6164a59524eb871b6aa92c60eebbe8a94b338946d69b2e7"} Mar 18 13:42:29.529764 master-0 kubenswrapper[27835]: I0318 13:42:29.529706 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " pod="openstack/swift-storage-0" Mar 18 13:42:29.537046 master-0 kubenswrapper[27835]: I0318 13:42:29.536995 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7a767523-b86f-496d-940f-7a8afb0c3535-etc-swift\") pod \"swift-storage-0\" (UID: \"7a767523-b86f-496d-940f-7a8afb0c3535\") " 
pod="openstack/swift-storage-0" Mar 18 13:42:29.537754 master-0 kubenswrapper[27835]: I0318 13:42:29.535405 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-p2pp6"] Mar 18 13:42:29.538191 master-0 kubenswrapper[27835]: E0318 13:42:29.538157 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a4c423-d112-4b2d-9917-2fb8af188187" containerName="mariadb-account-create-update" Mar 18 13:42:29.538191 master-0 kubenswrapper[27835]: I0318 13:42:29.538184 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a4c423-d112-4b2d-9917-2fb8af188187" containerName="mariadb-account-create-update" Mar 18 13:42:29.538260 master-0 kubenswrapper[27835]: E0318 13:42:29.538228 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e457fd-4fdd-4013-a0ba-e4b04480064b" containerName="mariadb-database-create" Mar 18 13:42:29.538260 master-0 kubenswrapper[27835]: I0318 13:42:29.538238 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e457fd-4fdd-4013-a0ba-e4b04480064b" containerName="mariadb-database-create" Mar 18 13:42:29.538260 master-0 kubenswrapper[27835]: E0318 13:42:29.538258 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6962730c-c54f-4806-8fd4-165f6c7b5728" containerName="mariadb-account-create-update" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: I0318 13:42:29.538268 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6962730c-c54f-4806-8fd4-165f6c7b5728" containerName="mariadb-account-create-update" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: E0318 13:42:29.538279 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="dnsmasq-dns" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: I0318 13:42:29.538288 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="dnsmasq-dns" Mar 18 13:42:29.538605 master-0 
kubenswrapper[27835]: E0318 13:42:29.538305 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19318ed4-494a-44fb-b05f-1b82d07994be" containerName="mariadb-database-create" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: I0318 13:42:29.538314 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="19318ed4-494a-44fb-b05f-1b82d07994be" containerName="mariadb-database-create" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: E0318 13:42:29.538332 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="init" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: I0318 13:42:29.538582 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="init" Mar 18 13:42:29.538605 master-0 kubenswrapper[27835]: E0318 13:42:29.538603 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5346156-9529-4575-bd1e-79d1d034ec56" containerName="mariadb-database-create" Mar 18 13:42:29.538853 master-0 kubenswrapper[27835]: I0318 13:42:29.538618 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5346156-9529-4575-bd1e-79d1d034ec56" containerName="mariadb-database-create" Mar 18 13:42:29.538853 master-0 kubenswrapper[27835]: E0318 13:42:29.538650 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40629581-9efa-429e-adb9-d34bd5a7503d" containerName="mariadb-account-create-update" Mar 18 13:42:29.538853 master-0 kubenswrapper[27835]: I0318 13:42:29.538659 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="40629581-9efa-429e-adb9-d34bd5a7503d" containerName="mariadb-account-create-update" Mar 18 13:42:29.538942 master-0 kubenswrapper[27835]: I0318 13:42:29.538907 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="40629581-9efa-429e-adb9-d34bd5a7503d" containerName="mariadb-account-create-update" Mar 18 13:42:29.538972 master-0 kubenswrapper[27835]: I0318 
13:42:29.538954 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="19318ed4-494a-44fb-b05f-1b82d07994be" containerName="mariadb-database-create" Mar 18 13:42:29.539003 master-0 kubenswrapper[27835]: I0318 13:42:29.538976 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a4c423-d112-4b2d-9917-2fb8af188187" containerName="mariadb-account-create-update" Mar 18 13:42:29.539003 master-0 kubenswrapper[27835]: I0318 13:42:29.538994 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5346156-9529-4575-bd1e-79d1d034ec56" containerName="mariadb-database-create" Mar 18 13:42:29.539064 master-0 kubenswrapper[27835]: I0318 13:42:29.539007 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6962730c-c54f-4806-8fd4-165f6c7b5728" containerName="mariadb-account-create-update" Mar 18 13:42:29.539064 master-0 kubenswrapper[27835]: I0318 13:42:29.539028 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e457fd-4fdd-4013-a0ba-e4b04480064b" containerName="mariadb-database-create" Mar 18 13:42:29.539064 master-0 kubenswrapper[27835]: I0318 13:42:29.539054 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a036cb1-24e8-401c-af08-1291061013fa" containerName="dnsmasq-dns" Mar 18 13:42:29.540405 master-0 kubenswrapper[27835]: I0318 13:42:29.540362 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.552946 master-0 kubenswrapper[27835]: I0318 13:42:29.552870 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-config-data" Mar 18 13:42:29.554109 master-0 kubenswrapper[27835]: I0318 13:42:29.554050 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p2pp6"] Mar 18 13:42:29.611455 master-0 kubenswrapper[27835]: I0318 13:42:29.611399 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gd8zd" Mar 18 13:42:29.636743 master-0 kubenswrapper[27835]: I0318 13:42:29.636627 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8x98\" (UniqueName: \"kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.637265 master-0 kubenswrapper[27835]: I0318 13:42:29.637204 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.637725 master-0 kubenswrapper[27835]: I0318 13:42:29.637696 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.638137 master-0 kubenswrapper[27835]: I0318 13:42:29.638112 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.667038 master-0 kubenswrapper[27835]: I0318 13:42:29.666889 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 18 13:42:29.740025 master-0 kubenswrapper[27835]: I0318 13:42:29.739810 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740025 master-0 kubenswrapper[27835]: I0318 13:42:29.739900 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740025 master-0 kubenswrapper[27835]: I0318 13:42:29.739945 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz5kx\" (UniqueName: \"kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740476 master-0 kubenswrapper[27835]: I0318 13:42:29.740072 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740476 master-0 kubenswrapper[27835]: I0318 13:42:29.740136 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740476 master-0 kubenswrapper[27835]: I0318 13:42:29.740163 27835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740476 master-0 kubenswrapper[27835]: I0318 13:42:29.740247 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices\") pod \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\" (UID: \"e6a77218-90a4-48a8-beff-2c3b2d66c53e\") " Mar 18 13:42:29.740746 master-0 kubenswrapper[27835]: I0318 13:42:29.740705 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8x98\" (UniqueName: \"kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.740788 master-0 kubenswrapper[27835]: I0318 13:42:29.740760 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.740847 master-0 kubenswrapper[27835]: I0318 13:42:29.740820 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.740937 master-0 kubenswrapper[27835]: I0318 13:42:29.740903 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.742350 master-0 kubenswrapper[27835]: I0318 13:42:29.742291 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:42:29.744256 master-0 kubenswrapper[27835]: I0318 13:42:29.744191 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:29.749640 master-0 kubenswrapper[27835]: I0318 13:42:29.749060 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx" (OuterVolumeSpecName: "kube-api-access-rz5kx") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "kube-api-access-rz5kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:29.749640 master-0 kubenswrapper[27835]: I0318 13:42:29.749273 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.749640 master-0 kubenswrapper[27835]: I0318 13:42:29.749522 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.751845 master-0 kubenswrapper[27835]: I0318 13:42:29.751749 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.754660 master-0 kubenswrapper[27835]: I0318 13:42:29.753531 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:42:29.765325 master-0 kubenswrapper[27835]: I0318 13:42:29.765248 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts" (OuterVolumeSpecName: "scripts") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:29.768700 master-0 kubenswrapper[27835]: I0318 13:42:29.768618 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8x98\" (UniqueName: \"kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98\") pod \"glance-db-sync-p2pp6\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:29.774657 master-0 kubenswrapper[27835]: I0318 13:42:29.774594 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:42:29.832037 master-0 kubenswrapper[27835]: I0318 13:42:29.828048 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "e6a77218-90a4-48a8-beff-2c3b2d66c53e" (UID: "e6a77218-90a4-48a8-beff-2c3b2d66c53e"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:42:29.842637 master-0 kubenswrapper[27835]: I0318 13:42:29.842518 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842637 master-0 kubenswrapper[27835]: I0318 13:42:29.842598 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842637 master-0 kubenswrapper[27835]: I0318 13:42:29.842619 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz5kx\" (UniqueName: \"kubernetes.io/projected/e6a77218-90a4-48a8-beff-2c3b2d66c53e-kube-api-access-rz5kx\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842874 master-0 kubenswrapper[27835]: I0318 13:42:29.842656 27835 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842874 master-0 kubenswrapper[27835]: I0318 13:42:29.842672 27835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/e6a77218-90a4-48a8-beff-2c3b2d66c53e-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842874 master-0 kubenswrapper[27835]: I0318 13:42:29.842684 27835 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/e6a77218-90a4-48a8-beff-2c3b2d66c53e-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.842874 master-0 kubenswrapper[27835]: I0318 13:42:29.842697 27835 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/e6a77218-90a4-48a8-beff-2c3b2d66c53e-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:29.924558 master-0 kubenswrapper[27835]: I0318 13:42:29.924481 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-p2pp6" Mar 18 13:42:30.122618 master-0 kubenswrapper[27835]: I0318 13:42:30.122522 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gd8zd" event={"ID":"e6a77218-90a4-48a8-beff-2c3b2d66c53e","Type":"ContainerDied","Data":"c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b"} Mar 18 13:42:30.122618 master-0 kubenswrapper[27835]: I0318 13:42:30.122577 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c69bcea4ea2caff973db5411cfdc07223342beeff14c44a83ec6532d9d30732b" Mar 18 13:42:30.122618 master-0 kubenswrapper[27835]: I0318 13:42:30.122633 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gd8zd" Mar 18 13:42:30.156519 master-0 kubenswrapper[27835]: I0318 13:42:30.156197 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 18 13:42:30.174746 master-0 kubenswrapper[27835]: W0318 13:42:30.174668 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a767523_b86f_496d_940f_7a8afb0c3535.slice/crio-1c12249ca043338442887431354ac377a5b75274c023f4b931f7289eb2d19830 WatchSource:0}: Error finding container 1c12249ca043338442887431354ac377a5b75274c023f4b931f7289eb2d19830: Status 404 returned error can't find the container with id 1c12249ca043338442887431354ac377a5b75274c023f4b931f7289eb2d19830 Mar 18 13:42:30.458529 master-0 kubenswrapper[27835]: W0318 13:42:30.458450 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05a99a49_5215_40c9_ba30_54618aa67479.slice/crio-3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a WatchSource:0}: Error finding container 3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a: Status 404 returned error can't find the container with id 3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a Mar 18 13:42:30.462805 master-0 kubenswrapper[27835]: I0318 13:42:30.462745 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-p2pp6"] Mar 18 13:42:31.133901 master-0 kubenswrapper[27835]: I0318 13:42:31.133838 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p2pp6" event={"ID":"05a99a49-5215-40c9-ba30-54618aa67479","Type":"ContainerStarted","Data":"3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a"} Mar 18 13:42:31.135315 master-0 kubenswrapper[27835]: I0318 13:42:31.135269 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"1c12249ca043338442887431354ac377a5b75274c023f4b931f7289eb2d19830"} Mar 18 13:42:31.255880 master-0 kubenswrapper[27835]: I0318 13:42:31.255776 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5njg9"] Mar 18 13:42:31.256336 master-0 kubenswrapper[27835]: E0318 13:42:31.256308 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a77218-90a4-48a8-beff-2c3b2d66c53e" containerName="swift-ring-rebalance" Mar 18 13:42:31.256336 master-0 kubenswrapper[27835]: I0318 13:42:31.256332 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a77218-90a4-48a8-beff-2c3b2d66c53e" containerName="swift-ring-rebalance" Mar 18 13:42:31.256711 master-0 kubenswrapper[27835]: I0318 13:42:31.256676 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a77218-90a4-48a8-beff-2c3b2d66c53e" containerName="swift-ring-rebalance" Mar 18 13:42:31.257539 master-0 kubenswrapper[27835]: I0318 13:42:31.257501 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.263196 master-0 kubenswrapper[27835]: I0318 13:42:31.260445 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 18 13:42:31.282679 master-0 kubenswrapper[27835]: I0318 13:42:31.282614 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5njg9"] Mar 18 13:42:31.379804 master-0 kubenswrapper[27835]: I0318 13:42:31.379731 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.380128 master-0 kubenswrapper[27835]: I0318 13:42:31.380095 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w94d\" (UniqueName: \"kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.482999 master-0 kubenswrapper[27835]: I0318 13:42:31.482933 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.483238 master-0 kubenswrapper[27835]: I0318 13:42:31.483080 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w94d\" (UniqueName: 
\"kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.486700 master-0 kubenswrapper[27835]: I0318 13:42:31.484085 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.504001 master-0 kubenswrapper[27835]: I0318 13:42:31.503943 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w94d\" (UniqueName: \"kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d\") pod \"root-account-create-update-5njg9\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.578503 master-0 kubenswrapper[27835]: I0318 13:42:31.578144 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:31.791703 master-0 kubenswrapper[27835]: I0318 13:42:31.791582 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 18 13:42:32.213804 master-0 kubenswrapper[27835]: I0318 13:42:32.213513 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5njg9"] Mar 18 13:42:32.780632 master-0 kubenswrapper[27835]: I0318 13:42:32.780547 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sf7pp" podUID="c90d0a00-53e9-4145-8137-d73cee5337f0" containerName="ovn-controller" probeResult="failure" output=< Mar 18 13:42:32.780632 master-0 kubenswrapper[27835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 18 13:42:32.780632 master-0 kubenswrapper[27835]: > Mar 18 13:42:32.881068 master-0 kubenswrapper[27835]: I0318 13:42:32.880995 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:42:32.888830 master-0 kubenswrapper[27835]: I0318 13:42:32.888678 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gtvxg" Mar 18 13:42:33.146478 master-0 kubenswrapper[27835]: I0318 13:42:33.146037 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sf7pp-config-8hb2q"] Mar 18 13:42:33.147778 master-0 kubenswrapper[27835]: I0318 13:42:33.147704 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.158482 master-0 kubenswrapper[27835]: I0318 13:42:33.153801 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 18 13:42:33.163021 master-0 kubenswrapper[27835]: I0318 13:42:33.162332 27835 generic.go:334] "Generic (PLEG): container finished" podID="644cf655-a9b1-4879-b84b-8db7dc2a98e6" containerID="8806024461afb74ffe1ca2f23565e550868b1ca6721d9b81e52d44d5987335bf" exitCode=0 Mar 18 13:42:33.163021 master-0 kubenswrapper[27835]: I0318 13:42:33.162430 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5njg9" event={"ID":"644cf655-a9b1-4879-b84b-8db7dc2a98e6","Type":"ContainerDied","Data":"8806024461afb74ffe1ca2f23565e550868b1ca6721d9b81e52d44d5987335bf"} Mar 18 13:42:33.163021 master-0 kubenswrapper[27835]: I0318 13:42:33.162470 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5njg9" event={"ID":"644cf655-a9b1-4879-b84b-8db7dc2a98e6","Type":"ContainerStarted","Data":"d90aaa3e85899269eff29f1d4b0cad8f28368dd149d88642f9f8f17c97267ad4"} Mar 18 13:42:33.166511 master-0 kubenswrapper[27835]: I0318 13:42:33.166386 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"c5c0d981f836ebee9f6bb691f1980135afe25e0b7b48eb290ac50182a314ca16"} Mar 18 13:42:33.166511 master-0 kubenswrapper[27835]: I0318 13:42:33.166442 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"1453eb1ce7210e465c2d14e421b220b535b07235833076b8e2adac01f76ac610"} Mar 18 13:42:33.166511 master-0 kubenswrapper[27835]: I0318 13:42:33.166459 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"ea2188f614e4adcba874ba9718fbabbb75f35814736c07fbc87395ec9075ab9d"} Mar 18 13:42:33.168328 master-0 kubenswrapper[27835]: I0318 13:42:33.168282 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp-config-8hb2q"] Mar 18 13:42:33.243732 master-0 kubenswrapper[27835]: I0318 13:42:33.243519 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.243732 master-0 kubenswrapper[27835]: I0318 13:42:33.243597 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-954jx\" (UniqueName: \"kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.243732 master-0 kubenswrapper[27835]: I0318 13:42:33.243720 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.247890 master-0 kubenswrapper[27835]: I0318 13:42:33.243848 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: 
\"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.247890 master-0 kubenswrapper[27835]: I0318 13:42:33.243948 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.247890 master-0 kubenswrapper[27835]: I0318 13:42:33.244176 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.350398 master-0 kubenswrapper[27835]: I0318 13:42:33.350328 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.351097 master-0 kubenswrapper[27835]: I0318 13:42:33.351051 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.351513 master-0 kubenswrapper[27835]: I0318 13:42:33.351468 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.351595 master-0 kubenswrapper[27835]: I0318 13:42:33.351563 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-954jx\" (UniqueName: \"kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.351840 master-0 kubenswrapper[27835]: I0318 13:42:33.351805 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.351988 master-0 kubenswrapper[27835]: I0318 13:42:33.351968 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.352607 master-0 kubenswrapper[27835]: I0318 13:42:33.352567 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.353286 master-0 kubenswrapper[27835]: I0318 13:42:33.353159 27835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.353740 master-0 kubenswrapper[27835]: I0318 13:42:33.353697 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.353937 master-0 kubenswrapper[27835]: I0318 13:42:33.353839 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.356009 master-0 kubenswrapper[27835]: I0318 13:42:33.355968 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.374599 master-0 kubenswrapper[27835]: I0318 13:42:33.374548 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-954jx\" (UniqueName: \"kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx\") pod \"ovn-controller-sf7pp-config-8hb2q\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:33.590938 master-0 kubenswrapper[27835]: I0318 13:42:33.590876 27835 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:34.088682 master-0 kubenswrapper[27835]: I0318 13:42:34.088614 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp-config-8hb2q"] Mar 18 13:42:34.182384 master-0 kubenswrapper[27835]: I0318 13:42:34.182331 27835 generic.go:334] "Generic (PLEG): container finished" podID="1b76c81c-7824-4bfa-af04-9c1fd928fb63" containerID="78946f613903c49866632a32a02d94783bbf6c6a66400c112e85ef599af226c9" exitCode=0 Mar 18 13:42:34.183307 master-0 kubenswrapper[27835]: I0318 13:42:34.182400 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b76c81c-7824-4bfa-af04-9c1fd928fb63","Type":"ContainerDied","Data":"78946f613903c49866632a32a02d94783bbf6c6a66400c112e85ef599af226c9"} Mar 18 13:42:34.185380 master-0 kubenswrapper[27835]: I0318 13:42:34.185275 27835 generic.go:334] "Generic (PLEG): container finished" podID="6f51d7b8-7e16-4c10-8e64-a5af8a8522ed" containerID="f03d087364e016e051b903fc5046b08a9917a4453cf4682b4b351e3b3f54bdba" exitCode=0 Mar 18 13:42:34.185592 master-0 kubenswrapper[27835]: I0318 13:42:34.185339 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed","Type":"ContainerDied","Data":"f03d087364e016e051b903fc5046b08a9917a4453cf4682b4b351e3b3f54bdba"} Mar 18 13:42:34.194972 master-0 kubenswrapper[27835]: I0318 13:42:34.194921 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-8hb2q" event={"ID":"d9b2decd-0ef7-4694-b770-ab7137ac8b83","Type":"ContainerStarted","Data":"7f67aeb7007a551c0b0ee5a6daf32f999bb238de969a2ce4b14c5c87bb6246f5"} Mar 18 13:42:34.198908 master-0 kubenswrapper[27835]: I0318 13:42:34.198860 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"a3cc4c78485ea13703c6db63f3d6b1cae123c3f201bf727476114cd2d1302038"} Mar 18 13:42:34.752003 master-0 kubenswrapper[27835]: I0318 13:42:34.751945 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:34.916157 master-0 kubenswrapper[27835]: I0318 13:42:34.916089 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w94d\" (UniqueName: \"kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d\") pod \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " Mar 18 13:42:34.916346 master-0 kubenswrapper[27835]: I0318 13:42:34.916214 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts\") pod \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\" (UID: \"644cf655-a9b1-4879-b84b-8db7dc2a98e6\") " Mar 18 13:42:34.917372 master-0 kubenswrapper[27835]: I0318 13:42:34.917277 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "644cf655-a9b1-4879-b84b-8db7dc2a98e6" (UID: "644cf655-a9b1-4879-b84b-8db7dc2a98e6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:34.921016 master-0 kubenswrapper[27835]: I0318 13:42:34.920982 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d" (OuterVolumeSpecName: "kube-api-access-9w94d") pod "644cf655-a9b1-4879-b84b-8db7dc2a98e6" (UID: "644cf655-a9b1-4879-b84b-8db7dc2a98e6"). InnerVolumeSpecName "kube-api-access-9w94d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:35.021498 master-0 kubenswrapper[27835]: I0318 13:42:35.021339 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w94d\" (UniqueName: \"kubernetes.io/projected/644cf655-a9b1-4879-b84b-8db7dc2a98e6-kube-api-access-9w94d\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:35.021721 master-0 kubenswrapper[27835]: I0318 13:42:35.021510 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/644cf655-a9b1-4879-b84b-8db7dc2a98e6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:35.219207 master-0 kubenswrapper[27835]: I0318 13:42:35.219170 27835 generic.go:334] "Generic (PLEG): container finished" podID="d9b2decd-0ef7-4694-b770-ab7137ac8b83" containerID="eef251ff3b79c10689d866f86932e87ec8fca874d418bf46a68b59aa4480fe70" exitCode=0 Mar 18 13:42:35.219361 master-0 kubenswrapper[27835]: I0318 13:42:35.219342 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-8hb2q" event={"ID":"d9b2decd-0ef7-4694-b770-ab7137ac8b83","Type":"ContainerDied","Data":"eef251ff3b79c10689d866f86932e87ec8fca874d418bf46a68b59aa4480fe70"} Mar 18 13:42:35.223757 master-0 kubenswrapper[27835]: I0318 13:42:35.221888 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1b76c81c-7824-4bfa-af04-9c1fd928fb63","Type":"ContainerStarted","Data":"9e18e124b6339dfeba6b55d6fae60ffca62ce404b8c69db187a4989deddf5ae2"} Mar 18 13:42:35.223757 master-0 kubenswrapper[27835]: I0318 13:42:35.222770 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 18 13:42:35.232458 master-0 kubenswrapper[27835]: I0318 13:42:35.232392 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"6f51d7b8-7e16-4c10-8e64-a5af8a8522ed","Type":"ContainerStarted","Data":"a38dcf8498f41319515c6cd88f48ab749d763a4d6910e044bb37e3b91bf7d1ad"} Mar 18 13:42:35.232952 master-0 kubenswrapper[27835]: I0318 13:42:35.232917 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 18 13:42:35.240735 master-0 kubenswrapper[27835]: I0318 13:42:35.240681 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5njg9" event={"ID":"644cf655-a9b1-4879-b84b-8db7dc2a98e6","Type":"ContainerDied","Data":"d90aaa3e85899269eff29f1d4b0cad8f28368dd149d88642f9f8f17c97267ad4"} Mar 18 13:42:35.240905 master-0 kubenswrapper[27835]: I0318 13:42:35.240744 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90aaa3e85899269eff29f1d4b0cad8f28368dd149d88642f9f8f17c97267ad4" Mar 18 13:42:35.240905 master-0 kubenswrapper[27835]: I0318 13:42:35.240810 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5njg9" Mar 18 13:42:35.278456 master-0 kubenswrapper[27835]: I0318 13:42:35.278232 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.76141742 podStartE2EDuration="1m3.278210315s" podCreationTimestamp="2026-03-18 13:41:32 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.24139163 +0000 UTC m=+1076.206603190" lastFinishedPulling="2026-03-18 13:41:59.758184505 +0000 UTC m=+1083.723396085" observedRunningTime="2026-03-18 13:42:35.26819854 +0000 UTC m=+1119.233410110" watchObservedRunningTime="2026-03-18 13:42:35.278210315 +0000 UTC m=+1119.243421885" Mar 18 13:42:35.299841 master-0 kubenswrapper[27835]: I0318 13:42:35.299721 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.765896758 podStartE2EDuration="1m2.299702555s" podCreationTimestamp="2026-03-18 13:41:33 +0000 UTC" firstStartedPulling="2026-03-18 13:41:52.588867922 +0000 UTC m=+1076.554079482" lastFinishedPulling="2026-03-18 13:42:00.122673719 +0000 UTC m=+1084.087885279" observedRunningTime="2026-03-18 13:42:35.288970451 +0000 UTC m=+1119.254182021" watchObservedRunningTime="2026-03-18 13:42:35.299702555 +0000 UTC m=+1119.264914115" Mar 18 13:42:36.260944 master-0 kubenswrapper[27835]: I0318 13:42:36.260893 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"bfb239fa77b1fb34e87db7a7901053ffba4d407adc2a32928edeb36496eaa5d8"} Mar 18 13:42:36.261393 master-0 kubenswrapper[27835]: I0318 13:42:36.260947 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"abfcf35421310e9030f2a16d1dc01ebc8725bddd9bfd856bf66122e912156403"} Mar 18 13:42:36.261393 
master-0 kubenswrapper[27835]: I0318 13:42:36.260960 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"b0dcd4a1bb64da28abaf6765c49f8a0bec37d4a7d1a7df105ea3ca44a2e44a3c"} Mar 18 13:42:36.830391 master-0 kubenswrapper[27835]: I0318 13:42:36.830339 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-8hb2q" Mar 18 13:42:36.876429 master-0 kubenswrapper[27835]: I0318 13:42:36.876320 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.876801 master-0 kubenswrapper[27835]: I0318 13:42:36.876555 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.876801 master-0 kubenswrapper[27835]: I0318 13:42:36.876615 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.876801 master-0 kubenswrapper[27835]: I0318 13:42:36.876694 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-954jx\" (UniqueName: \"kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.876801 master-0 
kubenswrapper[27835]: I0318 13:42:36.876726 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.876945 master-0 kubenswrapper[27835]: I0318 13:42:36.876852 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn\") pod \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\" (UID: \"d9b2decd-0ef7-4694-b770-ab7137ac8b83\") " Mar 18 13:42:36.877586 master-0 kubenswrapper[27835]: I0318 13:42:36.877550 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:42:36.877811 master-0 kubenswrapper[27835]: I0318 13:42:36.877763 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:36.877858 master-0 kubenswrapper[27835]: I0318 13:42:36.877827 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run" (OuterVolumeSpecName: "var-run") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:42:36.877858 master-0 kubenswrapper[27835]: I0318 13:42:36.877849 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:42:36.877956 master-0 kubenswrapper[27835]: I0318 13:42:36.877913 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts" (OuterVolumeSpecName: "scripts") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:36.886204 master-0 kubenswrapper[27835]: I0318 13:42:36.886143 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx" (OuterVolumeSpecName: "kube-api-access-954jx") pod "d9b2decd-0ef7-4694-b770-ab7137ac8b83" (UID: "d9b2decd-0ef7-4694-b770-ab7137ac8b83"). InnerVolumeSpecName "kube-api-access-954jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979436 27835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979488 27835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979500 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-954jx\" (UniqueName: \"kubernetes.io/projected/d9b2decd-0ef7-4694-b770-ab7137ac8b83-kube-api-access-954jx\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979510 27835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979522 27835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d9b2decd-0ef7-4694-b770-ab7137ac8b83-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:36.979533 master-0 kubenswrapper[27835]: I0318 13:42:36.979531 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d9b2decd-0ef7-4694-b770-ab7137ac8b83-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:37.272150 master-0 kubenswrapper[27835]: I0318 13:42:37.272098 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-8hb2q"
Mar 18 13:42:37.272714 master-0 kubenswrapper[27835]: I0318 13:42:37.272095 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-8hb2q" event={"ID":"d9b2decd-0ef7-4694-b770-ab7137ac8b83","Type":"ContainerDied","Data":"7f67aeb7007a551c0b0ee5a6daf32f999bb238de969a2ce4b14c5c87bb6246f5"}
Mar 18 13:42:37.272714 master-0 kubenswrapper[27835]: I0318 13:42:37.272252 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f67aeb7007a551c0b0ee5a6daf32f999bb238de969a2ce4b14c5c87bb6246f5"
Mar 18 13:42:37.276285 master-0 kubenswrapper[27835]: I0318 13:42:37.276240 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"a1b19a41f1e93ab4cfd929df0d66dcb662e029f6286bd25240f6da971e6c74fc"}
Mar 18 13:42:37.470851 master-0 kubenswrapper[27835]: E0318 13:42:37.470727 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9b2decd_0ef7_4694_b770_ab7137ac8b83.slice/crio-7f67aeb7007a551c0b0ee5a6daf32f999bb238de969a2ce4b14c5c87bb6246f5\": RecentStats: unable to find data in memory cache]"
Mar 18 13:42:37.777041 master-0 kubenswrapper[27835]: I0318 13:42:37.776989 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sf7pp"
Mar 18 13:42:37.971705 master-0 kubenswrapper[27835]: I0318 13:42:37.971640 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sf7pp-config-8hb2q"]
Mar 18 13:42:37.983729 master-0 kubenswrapper[27835]: I0318 13:42:37.983673 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sf7pp-config-8hb2q"]
Mar 18 13:42:38.135229 master-0 kubenswrapper[27835]: I0318 13:42:38.135153 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sf7pp-config-kh4jg"]
Mar 18 13:42:38.136485 master-0 kubenswrapper[27835]: E0318 13:42:38.136454 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b2decd-0ef7-4694-b770-ab7137ac8b83" containerName="ovn-config"
Mar 18 13:42:38.136485 master-0 kubenswrapper[27835]: I0318 13:42:38.136483 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b2decd-0ef7-4694-b770-ab7137ac8b83" containerName="ovn-config"
Mar 18 13:42:38.136584 master-0 kubenswrapper[27835]: E0318 13:42:38.136536 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644cf655-a9b1-4879-b84b-8db7dc2a98e6" containerName="mariadb-account-create-update"
Mar 18 13:42:38.136584 master-0 kubenswrapper[27835]: I0318 13:42:38.136548 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="644cf655-a9b1-4879-b84b-8db7dc2a98e6" containerName="mariadb-account-create-update"
Mar 18 13:42:38.138435 master-0 kubenswrapper[27835]: I0318 13:42:38.137086 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="644cf655-a9b1-4879-b84b-8db7dc2a98e6" containerName="mariadb-account-create-update"
Mar 18 13:42:38.138435 master-0 kubenswrapper[27835]: I0318 13:42:38.137113 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b2decd-0ef7-4694-b770-ab7137ac8b83" containerName="ovn-config"
Mar 18 13:42:38.138603 master-0 kubenswrapper[27835]: I0318 13:42:38.138527 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.153440 master-0 kubenswrapper[27835]: I0318 13:42:38.153356 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Mar 18 13:42:38.174206 master-0 kubenswrapper[27835]: I0318 13:42:38.174143 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp-config-kh4jg"]
Mar 18 13:42:38.204175 master-0 kubenswrapper[27835]: I0318 13:42:38.204122 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrdh\" (UniqueName: \"kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.204357 master-0 kubenswrapper[27835]: I0318 13:42:38.204192 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.204482 master-0 kubenswrapper[27835]: I0318 13:42:38.204390 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.204667 master-0 kubenswrapper[27835]: I0318 13:42:38.204628 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.204878 master-0 kubenswrapper[27835]: I0318 13:42:38.204820 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.204989 master-0 kubenswrapper[27835]: I0318 13:42:38.204933 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.309793 master-0 kubenswrapper[27835]: I0318 13:42:38.309732 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrdh\" (UniqueName: \"kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310279 master-0 kubenswrapper[27835]: I0318 13:42:38.309804 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310279 master-0 kubenswrapper[27835]: I0318 13:42:38.309843 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310279 master-0 kubenswrapper[27835]: I0318 13:42:38.310006 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310389 master-0 kubenswrapper[27835]: I0318 13:42:38.310348 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310590 master-0 kubenswrapper[27835]: I0318 13:42:38.310563 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310722 master-0 kubenswrapper[27835]: I0318 13:42:38.310585 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.310945 master-0 kubenswrapper[27835]: I0318 13:42:38.310928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.312077 master-0 kubenswrapper[27835]: I0318 13:42:38.310640 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.312077 master-0 kubenswrapper[27835]: I0318 13:42:38.310594 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.312968 master-0 kubenswrapper[27835]: I0318 13:42:38.312925 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.315506 master-0 kubenswrapper[27835]: I0318 13:42:38.315070 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b2decd-0ef7-4694-b770-ab7137ac8b83" path="/var/lib/kubelet/pods/d9b2decd-0ef7-4694-b770-ab7137ac8b83/volumes"
Mar 18 13:42:38.326593 master-0 kubenswrapper[27835]: I0318 13:42:38.326540 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrdh\" (UniqueName: \"kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh\") pod \"ovn-controller-sf7pp-config-kh4jg\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") " pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.330292 master-0 kubenswrapper[27835]: I0318 13:42:38.330258 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"b417bdc66ce850cde85194b9cab8a1869864ca27f649e9a3563a65ebbcdd1ee7"}
Mar 18 13:42:38.330455 master-0 kubenswrapper[27835]: I0318 13:42:38.330302 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"e4ea6279e0bdd1bb9c4c116879f5bde726a67e28735d9d651c4517bc79d89163"}
Mar 18 13:42:38.500009 master-0 kubenswrapper[27835]: I0318 13:42:38.499937 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:38.790769 master-0 kubenswrapper[27835]: E0318 13:42:38.787364 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a767523_b86f_496d_940f_7a8afb0c3535.slice/crio-conmon-2b95b6813bdb930f6eb3b2ebdd70c79d8ffbd6edfadc48023c147f44a35d285e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a767523_b86f_496d_940f_7a8afb0c3535.slice/crio-2b95b6813bdb930f6eb3b2ebdd70c79d8ffbd6edfadc48023c147f44a35d285e.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 13:42:39.345611 master-0 kubenswrapper[27835]: I0318 13:42:39.345554 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"2b95b6813bdb930f6eb3b2ebdd70c79d8ffbd6edfadc48023c147f44a35d285e"}
Mar 18 13:42:39.345611 master-0 kubenswrapper[27835]: I0318 13:42:39.345603 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"d1fb4f8997efe725543212ce13311618706b93a556dd61412d4e1e56675a4105"}
Mar 18 13:42:39.345611 master-0 kubenswrapper[27835]: I0318 13:42:39.345615 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"7980d2d98078cb720a0021bbb77d47abfce4992c52712b6e1df31476321996c3"}
Mar 18 13:42:39.468378 master-0 kubenswrapper[27835]: I0318 13:42:39.468290 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sf7pp-config-kh4jg"]
Mar 18 13:42:46.300965 master-0 kubenswrapper[27835]: W0318 13:42:46.300901 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32c9d64e_7be8_4437_aa1f_19726d1f7383.slice/crio-d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951 WatchSource:0}: Error finding container d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951: Status 404 returned error can't find the container with id d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951
Mar 18 13:42:46.447990 master-0 kubenswrapper[27835]: I0318 13:42:46.447933 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-kh4jg" event={"ID":"32c9d64e-7be8-4437-aa1f-19726d1f7383","Type":"ContainerStarted","Data":"d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951"}
Mar 18 13:42:47.460718 master-0 kubenswrapper[27835]: I0318 13:42:47.460364 27835 generic.go:334] "Generic (PLEG): container finished" podID="32c9d64e-7be8-4437-aa1f-19726d1f7383" containerID="76ae9528f94638856ffd919244239fe4fc5ea2ae62abb4866666f27ab61d8b5d" exitCode=0
Mar 18 13:42:47.460718 master-0 kubenswrapper[27835]: I0318 13:42:47.460435 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-kh4jg" event={"ID":"32c9d64e-7be8-4437-aa1f-19726d1f7383","Type":"ContainerDied","Data":"76ae9528f94638856ffd919244239fe4fc5ea2ae62abb4866666f27ab61d8b5d"}
Mar 18 13:42:47.472605 master-0 kubenswrapper[27835]: I0318 13:42:47.472374 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"ffe83fb57dc1b179959d25c78964affc8210e603e3bec0c72905c8646cd1dcf1"}
Mar 18 13:42:47.472605 master-0 kubenswrapper[27835]: I0318 13:42:47.472448 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7a767523-b86f-496d-940f-7a8afb0c3535","Type":"ContainerStarted","Data":"acf386c54061e7f9d0b4f84b9ade499e12718f915f0d4b031b9c3be9813affc6"}
Mar 18 13:42:47.475082 master-0 kubenswrapper[27835]: I0318 13:42:47.475030 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p2pp6" event={"ID":"05a99a49-5215-40c9-ba30-54618aa67479","Type":"ContainerStarted","Data":"85327a55f766489236795af0d710f18fdd741fe468da5ee0e450b662530134cb"}
Mar 18 13:42:47.513274 master-0 kubenswrapper[27835]: I0318 13:42:47.513200 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-p2pp6" podStartSLOduration=2.548957213 podStartE2EDuration="18.513182596s" podCreationTimestamp="2026-03-18 13:42:29 +0000 UTC" firstStartedPulling="2026-03-18 13:42:30.461102999 +0000 UTC m=+1114.426314569" lastFinishedPulling="2026-03-18 13:42:46.425328392 +0000 UTC m=+1130.390539952" observedRunningTime="2026-03-18 13:42:47.507261678 +0000 UTC m=+1131.472473238" watchObservedRunningTime="2026-03-18 13:42:47.513182596 +0000 UTC m=+1131.478394156"
Mar 18 13:42:47.552264 master-0 kubenswrapper[27835]: I0318 13:42:47.552171 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.972494738 podStartE2EDuration="38.552153959s" podCreationTimestamp="2026-03-18 13:42:09 +0000 UTC" firstStartedPulling="2026-03-18 13:42:30.178925077 +0000 UTC m=+1114.144136637" lastFinishedPulling="2026-03-18 13:42:37.758584298 +0000 UTC m=+1121.723795858" observedRunningTime="2026-03-18 13:42:47.542526123 +0000 UTC m=+1131.507737683" watchObservedRunningTime="2026-03-18 13:42:47.552153959 +0000 UTC m=+1131.517365519"
Mar 18 13:42:47.858758 master-0 kubenswrapper[27835]: I0318 13:42:47.858138 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"]
Mar 18 13:42:47.863181 master-0 kubenswrapper[27835]: I0318 13:42:47.860068 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.883526 master-0 kubenswrapper[27835]: I0318 13:42:47.875093 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 18 13:42:47.907483 master-0 kubenswrapper[27835]: I0318 13:42:47.907429 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"]
Mar 18 13:42:47.966787 master-0 kubenswrapper[27835]: I0318 13:42:47.966713 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xpp\" (UniqueName: \"kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.966787 master-0 kubenswrapper[27835]: I0318 13:42:47.966801 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.967058 master-0 kubenswrapper[27835]: I0318 13:42:47.966926 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.967058 master-0 kubenswrapper[27835]: I0318 13:42:47.966971 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.967058 master-0 kubenswrapper[27835]: I0318 13:42:47.967003 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:47.967163 master-0 kubenswrapper[27835]: I0318 13:42:47.967078 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.069822 master-0 kubenswrapper[27835]: I0318 13:42:48.069753 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6xpp\" (UniqueName: \"kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.070052 master-0 kubenswrapper[27835]: I0318 13:42:48.069832 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.070052 master-0 kubenswrapper[27835]: I0318 13:42:48.069955 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.070052 master-0 kubenswrapper[27835]: I0318 13:42:48.069999 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.070052 master-0 kubenswrapper[27835]: I0318 13:42:48.070028 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.070189 master-0 kubenswrapper[27835]: I0318 13:42:48.070110 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.072582 master-0 kubenswrapper[27835]: I0318 13:42:48.071243 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.073188 master-0 kubenswrapper[27835]: I0318 13:42:48.073161 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.073578 master-0 kubenswrapper[27835]: I0318 13:42:48.073554 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.073969 master-0 kubenswrapper[27835]: I0318 13:42:48.073952 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.078435 master-0 kubenswrapper[27835]: I0318 13:42:48.075811 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.103361 master-0 kubenswrapper[27835]: I0318 13:42:48.103318 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6xpp\" (UniqueName: \"kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp\") pod \"dnsmasq-dns-85f88db649-6j6lk\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.198909 master-0 kubenswrapper[27835]: I0318 13:42:48.198843 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:48.726912 master-0 kubenswrapper[27835]: I0318 13:42:48.726822 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"]
Mar 18 13:42:48.972347 master-0 kubenswrapper[27835]: I0318 13:42:48.972287 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:49.006813 master-0 kubenswrapper[27835]: I0318 13:42:49.006721 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrdh\" (UniqueName: \"kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.007044 master-0 kubenswrapper[27835]: I0318 13:42:49.006992 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.007123 master-0 kubenswrapper[27835]: I0318 13:42:49.007069 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.007573 master-0 kubenswrapper[27835]: I0318 13:42:49.007539 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.007642 master-0 kubenswrapper[27835]: I0318 13:42:49.007630 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.007715 master-0 kubenswrapper[27835]: I0318 13:42:49.007693 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn\") pod \"32c9d64e-7be8-4437-aa1f-19726d1f7383\" (UID: \"32c9d64e-7be8-4437-aa1f-19726d1f7383\") "
Mar 18 13:42:49.008819 master-0 kubenswrapper[27835]: I0318 13:42:49.008736 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:42:49.014764 master-0 kubenswrapper[27835]: I0318 13:42:49.014694 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:42:49.015887 master-0 kubenswrapper[27835]: I0318 13:42:49.015842 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:42:49.015997 master-0 kubenswrapper[27835]: I0318 13:42:49.015921 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run" (OuterVolumeSpecName: "var-run") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:42:49.024608 master-0 kubenswrapper[27835]: I0318 13:42:49.016472 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts" (OuterVolumeSpecName: "scripts") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:42:49.024608 master-0 kubenswrapper[27835]: I0318 13:42:49.017183 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 13:42:49.024608 master-0 kubenswrapper[27835]: I0318 13:42:49.018311 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh" (OuterVolumeSpecName: "kube-api-access-smrdh") pod "32c9d64e-7be8-4437-aa1f-19726d1f7383" (UID: "32c9d64e-7be8-4437-aa1f-19726d1f7383"). InnerVolumeSpecName "kube-api-access-smrdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:42:49.117728 master-0 kubenswrapper[27835]: I0318 13:42:49.117628 27835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.117921 master-0 kubenswrapper[27835]: I0318 13:42:49.117742 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.117921 master-0 kubenswrapper[27835]: I0318 13:42:49.117769 27835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/32c9d64e-7be8-4437-aa1f-19726d1f7383-additional-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.117921 master-0 kubenswrapper[27835]: I0318 13:42:49.117793 27835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.117921 master-0 kubenswrapper[27835]: I0318 13:42:49.117808 27835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/32c9d64e-7be8-4437-aa1f-19726d1f7383-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.117921 master-0 kubenswrapper[27835]: I0318 13:42:49.117820 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrdh\" (UniqueName: \"kubernetes.io/projected/32c9d64e-7be8-4437-aa1f-19726d1f7383-kube-api-access-smrdh\") on node \"master-0\" DevicePath \"\""
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.508004 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sf7pp-config-kh4jg"
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.508635 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sf7pp-config-kh4jg" event={"ID":"32c9d64e-7be8-4437-aa1f-19726d1f7383","Type":"ContainerDied","Data":"d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951"}
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.508694 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b3b0fa6112c6b57300afb712a94f8e63ac15f2f92b2a5990c735d74a090951"
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.511637 27835 generic.go:334] "Generic (PLEG): container finished" podID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerID="b5bd8e58feb1dc9c8c8f5e8c9c7be58aabc1f0e512a605ad4a5de4b6c51b5b3d" exitCode=0
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.511668 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" event={"ID":"529ada08-25e2-4b25-aa95-f6fbb5263c13","Type":"ContainerDied","Data":"b5bd8e58feb1dc9c8c8f5e8c9c7be58aabc1f0e512a605ad4a5de4b6c51b5b3d"}
Mar 18 13:42:49.511731 master-0 kubenswrapper[27835]: I0318 13:42:49.511689 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" event={"ID":"529ada08-25e2-4b25-aa95-f6fbb5263c13","Type":"ContainerStarted","Data":"af80f58d9530c589384c0f3c30e14b0cd81e24fb4adb68ffd362986c27d7e130"}
Mar 18 13:42:50.091849 master-0 kubenswrapper[27835]: I0318 13:42:50.091647 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sf7pp-config-kh4jg"]
Mar 18 13:42:50.107947 master-0 kubenswrapper[27835]: I0318 13:42:50.107835 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sf7pp-config-kh4jg"]
Mar 18 13:42:50.265738 master-0 kubenswrapper[27835]: I0318 13:42:50.265670 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 18 13:42:50.292386 master-0 kubenswrapper[27835]: I0318 13:42:50.292320 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c9d64e-7be8-4437-aa1f-19726d1f7383" path="/var/lib/kubelet/pods/32c9d64e-7be8-4437-aa1f-19726d1f7383/volumes"
Mar 18 13:42:50.530069 master-0 kubenswrapper[27835]: I0318 13:42:50.529975 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" event={"ID":"529ada08-25e2-4b25-aa95-f6fbb5263c13","Type":"ContainerStarted","Data":"19cd67eec991225b9b13b9e8eaf173efffee41d378abfb1990727e3839b98a61"}
Mar 18 13:42:50.530314 master-0 kubenswrapper[27835]: I0318 13:42:50.530098 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f88db649-6j6lk"
Mar 18 13:42:50.571499 master-0 kubenswrapper[27835]: I0318 13:42:50.571381 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" podStartSLOduration=3.571362497 podStartE2EDuration="3.571362497s" podCreationTimestamp="2026-03-18 13:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:50.56163207 +0000 UTC m=+1134.526843630" watchObservedRunningTime="2026-03-18 13:42:50.571362497 +0000 UTC m=+1134.536574057"
Mar 18 13:42:50.822320 master-0 kubenswrapper[27835]: I0318 13:42:50.822259 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8j9l6"]
Mar 18 13:42:50.822819 master-0 kubenswrapper[27835]: E0318 13:42:50.822785 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c9d64e-7be8-4437-aa1f-19726d1f7383" containerName="ovn-config"
Mar 18 13:42:50.822819 master-0 kubenswrapper[27835]: I0318 13:42:50.822806 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c9d64e-7be8-4437-aa1f-19726d1f7383" containerName="ovn-config"
Mar 18 13:42:50.823046 master-0 kubenswrapper[27835]: I0318 13:42:50.823027 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c9d64e-7be8-4437-aa1f-19726d1f7383" containerName="ovn-config"
Mar 18 13:42:50.823749 master-0 kubenswrapper[27835]: I0318 13:42:50.823713 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8j9l6"
Mar 18 13:42:50.913945 master-0 kubenswrapper[27835]: I0318 13:42:50.913879 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8j9l6"]
Mar 18 13:42:50.950703 master-0 kubenswrapper[27835]: I0318 13:42:50.950638 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-cb39-account-create-update-tfcs5"]
Mar 18 13:42:50.952113 master-0 kubenswrapper[27835]: I0318 13:42:50.952080 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cb39-account-create-update-tfcs5"
Mar 18 13:42:50.986176 master-0 kubenswrapper[27835]: I0318 13:42:50.963247 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cb39-account-create-update-tfcs5"]
Mar 18 13:42:50.989025 master-0 kubenswrapper[27835]: I0318 13:42:50.986724 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 18 13:42:50.993498 master-0 kubenswrapper[27835]: I0318 13:42:50.993444 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts\") pod \"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " pod="openstack/cinder-cb39-account-create-update-tfcs5"
Mar 18 13:42:50.993592 master-0 kubenswrapper[27835]: I0318 13:42:50.993517 27835 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxnw\" (UniqueName: \"kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw\") pod \"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:50.993677 master-0 kubenswrapper[27835]: I0318 13:42:50.993653 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:50.993910 master-0 kubenswrapper[27835]: I0318 13:42:50.993872 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl5xw\" (UniqueName: \"kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.100637 master-0 kubenswrapper[27835]: I0318 13:42:51.100477 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl5xw\" (UniqueName: \"kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.100637 master-0 kubenswrapper[27835]: I0318 13:42:51.100589 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts\") pod \"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " 
pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:51.100637 master-0 kubenswrapper[27835]: I0318 13:42:51.100628 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxnw\" (UniqueName: \"kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw\") pod \"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:51.101202 master-0 kubenswrapper[27835]: I0318 13:42:51.100718 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.101542 master-0 kubenswrapper[27835]: I0318 13:42:51.101474 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts\") pod \"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:51.101618 master-0 kubenswrapper[27835]: I0318 13:42:51.101579 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.128438 master-0 kubenswrapper[27835]: I0318 13:42:51.127578 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxnw\" (UniqueName: \"kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw\") pod 
\"cinder-cb39-account-create-update-tfcs5\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:51.132587 master-0 kubenswrapper[27835]: I0318 13:42:51.131656 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7tsdz"] Mar 18 13:42:51.135160 master-0 kubenswrapper[27835]: I0318 13:42:51.132996 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.144439 master-0 kubenswrapper[27835]: I0318 13:42:51.141232 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl5xw\" (UniqueName: \"kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw\") pod \"cinder-db-create-8j9l6\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.144439 master-0 kubenswrapper[27835]: I0318 13:42:51.142369 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:51.155696 master-0 kubenswrapper[27835]: I0318 13:42:51.155294 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7tsdz"] Mar 18 13:42:51.202900 master-0 kubenswrapper[27835]: I0318 13:42:51.202838 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.203116 master-0 kubenswrapper[27835]: I0318 13:42:51.202989 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssrn2\" (UniqueName: \"kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.241225 master-0 kubenswrapper[27835]: I0318 13:42:51.241152 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-ac18-account-create-update-t8fjp"] Mar 18 13:42:51.242616 master-0 kubenswrapper[27835]: I0318 13:42:51.242561 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.248143 master-0 kubenswrapper[27835]: I0318 13:42:51.246482 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 18 13:42:51.298682 master-0 kubenswrapper[27835]: I0318 13:42:51.296533 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac18-account-create-update-t8fjp"] Mar 18 13:42:51.305716 master-0 kubenswrapper[27835]: I0318 13:42:51.305643 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssrn2\" (UniqueName: \"kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.305906 master-0 kubenswrapper[27835]: I0318 13:42:51.305721 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn99r\" (UniqueName: \"kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.305906 master-0 kubenswrapper[27835]: I0318 13:42:51.305867 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.306033 master-0 kubenswrapper[27835]: I0318 13:42:51.305988 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.307151 master-0 kubenswrapper[27835]: I0318 13:42:51.307130 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.321521 master-0 kubenswrapper[27835]: I0318 13:42:51.319553 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:51.339503 master-0 kubenswrapper[27835]: I0318 13:42:51.335089 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssrn2\" (UniqueName: \"kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2\") pod \"neutron-db-create-7tsdz\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.402218 master-0 kubenswrapper[27835]: I0318 13:42:51.389371 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-sj9px"] Mar 18 13:42:51.402218 master-0 kubenswrapper[27835]: I0318 13:42:51.392855 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.402218 master-0 kubenswrapper[27835]: I0318 13:42:51.396631 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 13:42:51.402218 master-0 kubenswrapper[27835]: I0318 13:42:51.397032 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 13:42:51.402218 master-0 kubenswrapper[27835]: I0318 13:42:51.397155 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 13:42:51.407518 master-0 kubenswrapper[27835]: I0318 13:42:51.406797 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn99r\" (UniqueName: \"kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.407518 master-0 kubenswrapper[27835]: I0318 13:42:51.406852 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.407518 master-0 kubenswrapper[27835]: I0318 13:42:51.406877 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.407518 master-0 kubenswrapper[27835]: I0318 13:42:51.406915 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.407518 master-0 kubenswrapper[27835]: I0318 13:42:51.406955 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94t6\" (UniqueName: \"kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.408161 master-0 kubenswrapper[27835]: I0318 13:42:51.408099 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.435679 master-0 kubenswrapper[27835]: I0318 13:42:51.435554 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sj9px"] Mar 18 13:42:51.450725 master-0 kubenswrapper[27835]: I0318 13:42:51.450108 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn99r\" (UniqueName: \"kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r\") pod \"neutron-ac18-account-create-update-t8fjp\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.509510 master-0 kubenswrapper[27835]: I0318 13:42:51.508890 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.509510 master-0 kubenswrapper[27835]: I0318 13:42:51.508944 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.509510 master-0 kubenswrapper[27835]: I0318 13:42:51.509015 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j94t6\" (UniqueName: \"kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.513676 master-0 kubenswrapper[27835]: I0318 13:42:51.513519 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.514447 master-0 kubenswrapper[27835]: I0318 13:42:51.514054 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.526492 master-0 kubenswrapper[27835]: I0318 13:42:51.526151 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j94t6\" (UniqueName: 
\"kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6\") pod \"keystone-db-sync-sj9px\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") " pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.545470 master-0 kubenswrapper[27835]: I0318 13:42:51.544912 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:51.564099 master-0 kubenswrapper[27835]: I0318 13:42:51.563442 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:51.730016 master-0 kubenswrapper[27835]: I0318 13:42:51.729941 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sj9px" Mar 18 13:42:51.776102 master-0 kubenswrapper[27835]: I0318 13:42:51.776042 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8j9l6"] Mar 18 13:42:51.880840 master-0 kubenswrapper[27835]: W0318 13:42:51.880742 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24b8658d_cf54_48b9_b4ee_fceef6236403.slice/crio-d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a WatchSource:0}: Error finding container d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a: Status 404 returned error can't find the container with id d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a Mar 18 13:42:51.980776 master-0 kubenswrapper[27835]: I0318 13:42:51.980721 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-cb39-account-create-update-tfcs5"] Mar 18 13:42:52.310128 master-0 kubenswrapper[27835]: I0318 13:42:52.310090 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7tsdz"] Mar 18 13:42:52.422376 master-0 kubenswrapper[27835]: I0318 13:42:52.422324 27835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-ac18-account-create-update-t8fjp"] Mar 18 13:42:52.545520 master-0 kubenswrapper[27835]: W0318 13:42:52.545014 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0c2698f_e0c0_413f_8e86_184f8ab0b231.slice/crio-53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04 WatchSource:0}: Error finding container 53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04: Status 404 returned error can't find the container with id 53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04 Mar 18 13:42:52.552515 master-0 kubenswrapper[27835]: I0318 13:42:52.551358 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-sj9px"] Mar 18 13:42:52.559338 master-0 kubenswrapper[27835]: I0318 13:42:52.557255 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac18-account-create-update-t8fjp" event={"ID":"9fb62e62-ee53-4992-a2af-06420a2812ed","Type":"ContainerStarted","Data":"a008afcd8b126afc69eeeb2a62296121e1ba07ab2e03a2816c890332d55ab8ce"} Mar 18 13:42:52.561481 master-0 kubenswrapper[27835]: I0318 13:42:52.560857 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb39-account-create-update-tfcs5" event={"ID":"dad967d0-9ad1-4342-9885-e5e28a68d3af","Type":"ContainerStarted","Data":"46e8e1ab291f936f7438b2757bcc3a05b5562e1bc4b522d3a77274b5a1652628"} Mar 18 13:42:52.561481 master-0 kubenswrapper[27835]: I0318 13:42:52.560896 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb39-account-create-update-tfcs5" event={"ID":"dad967d0-9ad1-4342-9885-e5e28a68d3af","Type":"ContainerStarted","Data":"7d15945d16196306eeeabdfc242a4a0e32312d9fade82960d2b2bf8da0968134"} Mar 18 13:42:52.565361 master-0 kubenswrapper[27835]: I0318 13:42:52.565304 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-7tsdz" event={"ID":"825a0cb3-ac48-4974-8e6c-eb30956b617e","Type":"ContainerStarted","Data":"d74d6f03506180b25398a6a625bd404c17eb5e4856580bb494cdd3d086969a64"} Mar 18 13:42:52.567438 master-0 kubenswrapper[27835]: I0318 13:42:52.567390 27835 generic.go:334] "Generic (PLEG): container finished" podID="24b8658d-cf54-48b9-b4ee-fceef6236403" containerID="594ee041582be16686f5c860563b8df6b517dede5d725ee4966e641f3c46009d" exitCode=0 Mar 18 13:42:52.567555 master-0 kubenswrapper[27835]: I0318 13:42:52.567444 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8j9l6" event={"ID":"24b8658d-cf54-48b9-b4ee-fceef6236403","Type":"ContainerDied","Data":"594ee041582be16686f5c860563b8df6b517dede5d725ee4966e641f3c46009d"} Mar 18 13:42:52.567555 master-0 kubenswrapper[27835]: I0318 13:42:52.567469 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8j9l6" event={"ID":"24b8658d-cf54-48b9-b4ee-fceef6236403","Type":"ContainerStarted","Data":"d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a"} Mar 18 13:42:52.583702 master-0 kubenswrapper[27835]: I0318 13:42:52.583619 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-cb39-account-create-update-tfcs5" podStartSLOduration=2.583602009 podStartE2EDuration="2.583602009s" podCreationTimestamp="2026-03-18 13:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:42:52.581909364 +0000 UTC m=+1136.547120924" watchObservedRunningTime="2026-03-18 13:42:52.583602009 +0000 UTC m=+1136.548813569" Mar 18 13:42:53.589455 master-0 kubenswrapper[27835]: I0318 13:42:53.588001 27835 generic.go:334] "Generic (PLEG): container finished" podID="9fb62e62-ee53-4992-a2af-06420a2812ed" containerID="10e0563ce5145c2202248676fff2d6b0496f142f9215ef6a810ece8915c8c310" exitCode=0 Mar 18 13:42:53.589455 
master-0 kubenswrapper[27835]: I0318 13:42:53.588118 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac18-account-create-update-t8fjp" event={"ID":"9fb62e62-ee53-4992-a2af-06420a2812ed","Type":"ContainerDied","Data":"10e0563ce5145c2202248676fff2d6b0496f142f9215ef6a810ece8915c8c310"} Mar 18 13:42:53.593322 master-0 kubenswrapper[27835]: I0318 13:42:53.591596 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sj9px" event={"ID":"b0c2698f-e0c0-413f-8e86-184f8ab0b231","Type":"ContainerStarted","Data":"53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04"} Mar 18 13:42:53.594937 master-0 kubenswrapper[27835]: I0318 13:42:53.593757 27835 generic.go:334] "Generic (PLEG): container finished" podID="dad967d0-9ad1-4342-9885-e5e28a68d3af" containerID="46e8e1ab291f936f7438b2757bcc3a05b5562e1bc4b522d3a77274b5a1652628" exitCode=0 Mar 18 13:42:53.594937 master-0 kubenswrapper[27835]: I0318 13:42:53.593808 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb39-account-create-update-tfcs5" event={"ID":"dad967d0-9ad1-4342-9885-e5e28a68d3af","Type":"ContainerDied","Data":"46e8e1ab291f936f7438b2757bcc3a05b5562e1bc4b522d3a77274b5a1652628"} Mar 18 13:42:53.596071 master-0 kubenswrapper[27835]: I0318 13:42:53.595916 27835 generic.go:334] "Generic (PLEG): container finished" podID="825a0cb3-ac48-4974-8e6c-eb30956b617e" containerID="fd85e784e4c3be585bb1bb43e6cf8ba5f580442fba6c3b7a1465f825cc344572" exitCode=0 Mar 18 13:42:53.596071 master-0 kubenswrapper[27835]: I0318 13:42:53.595971 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7tsdz" event={"ID":"825a0cb3-ac48-4974-8e6c-eb30956b617e","Type":"ContainerDied","Data":"fd85e784e4c3be585bb1bb43e6cf8ba5f580442fba6c3b7a1465f825cc344572"} Mar 18 13:42:54.124110 master-0 kubenswrapper[27835]: I0318 13:42:54.124068 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:54.286665 master-0 kubenswrapper[27835]: I0318 13:42:54.286573 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl5xw\" (UniqueName: \"kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw\") pod \"24b8658d-cf54-48b9-b4ee-fceef6236403\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " Mar 18 13:42:54.286911 master-0 kubenswrapper[27835]: I0318 13:42:54.286750 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts\") pod \"24b8658d-cf54-48b9-b4ee-fceef6236403\" (UID: \"24b8658d-cf54-48b9-b4ee-fceef6236403\") " Mar 18 13:42:54.287370 master-0 kubenswrapper[27835]: I0318 13:42:54.287291 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24b8658d-cf54-48b9-b4ee-fceef6236403" (UID: "24b8658d-cf54-48b9-b4ee-fceef6236403"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:54.287829 master-0 kubenswrapper[27835]: I0318 13:42:54.287786 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24b8658d-cf54-48b9-b4ee-fceef6236403-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:54.292921 master-0 kubenswrapper[27835]: I0318 13:42:54.292819 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw" (OuterVolumeSpecName: "kube-api-access-dl5xw") pod "24b8658d-cf54-48b9-b4ee-fceef6236403" (UID: "24b8658d-cf54-48b9-b4ee-fceef6236403"). InnerVolumeSpecName "kube-api-access-dl5xw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:54.389334 master-0 kubenswrapper[27835]: I0318 13:42:54.389255 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl5xw\" (UniqueName: \"kubernetes.io/projected/24b8658d-cf54-48b9-b4ee-fceef6236403-kube-api-access-dl5xw\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:54.616097 master-0 kubenswrapper[27835]: I0318 13:42:54.615921 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8j9l6" Mar 18 13:42:54.616868 master-0 kubenswrapper[27835]: I0318 13:42:54.616841 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8j9l6" event={"ID":"24b8658d-cf54-48b9-b4ee-fceef6236403","Type":"ContainerDied","Data":"d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a"} Mar 18 13:42:54.616948 master-0 kubenswrapper[27835]: I0318 13:42:54.616873 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b72a0687c276c4bc1853a1a57c989295d7c4824554e07d7c40ba1eafa74b7a" Mar 18 13:42:55.067182 master-0 kubenswrapper[27835]: I0318 13:42:55.066307 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:55.106187 master-0 kubenswrapper[27835]: I0318 13:42:55.106117 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts\") pod \"dad967d0-9ad1-4342-9885-e5e28a68d3af\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " Mar 18 13:42:55.106447 master-0 kubenswrapper[27835]: I0318 13:42:55.106201 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txxnw\" (UniqueName: \"kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw\") pod \"dad967d0-9ad1-4342-9885-e5e28a68d3af\" (UID: \"dad967d0-9ad1-4342-9885-e5e28a68d3af\") " Mar 18 13:42:55.113293 master-0 kubenswrapper[27835]: I0318 13:42:55.113089 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dad967d0-9ad1-4342-9885-e5e28a68d3af" (UID: "dad967d0-9ad1-4342-9885-e5e28a68d3af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:55.114610 master-0 kubenswrapper[27835]: I0318 13:42:55.113661 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw" (OuterVolumeSpecName: "kube-api-access-txxnw") pod "dad967d0-9ad1-4342-9885-e5e28a68d3af" (UID: "dad967d0-9ad1-4342-9885-e5e28a68d3af"). InnerVolumeSpecName "kube-api-access-txxnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:55.208759 master-0 kubenswrapper[27835]: I0318 13:42:55.208631 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad967d0-9ad1-4342-9885-e5e28a68d3af-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:55.208759 master-0 kubenswrapper[27835]: I0318 13:42:55.208681 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txxnw\" (UniqueName: \"kubernetes.io/projected/dad967d0-9ad1-4342-9885-e5e28a68d3af-kube-api-access-txxnw\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:55.631481 master-0 kubenswrapper[27835]: I0318 13:42:55.631354 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-cb39-account-create-update-tfcs5" event={"ID":"dad967d0-9ad1-4342-9885-e5e28a68d3af","Type":"ContainerDied","Data":"7d15945d16196306eeeabdfc242a4a0e32312d9fade82960d2b2bf8da0968134"} Mar 18 13:42:55.631481 master-0 kubenswrapper[27835]: I0318 13:42:55.631399 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d15945d16196306eeeabdfc242a4a0e32312d9fade82960d2b2bf8da0968134" Mar 18 13:42:55.631481 master-0 kubenswrapper[27835]: I0318 13:42:55.631430 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-cb39-account-create-update-tfcs5" Mar 18 13:42:58.138731 master-0 kubenswrapper[27835]: I0318 13:42:58.138679 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:58.146085 master-0 kubenswrapper[27835]: I0318 13:42:58.146054 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:58.201985 master-0 kubenswrapper[27835]: I0318 13:42:58.201602 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" Mar 18 13:42:58.294048 master-0 kubenswrapper[27835]: I0318 13:42:58.293818 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn99r\" (UniqueName: \"kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r\") pod \"9fb62e62-ee53-4992-a2af-06420a2812ed\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " Mar 18 13:42:58.294048 master-0 kubenswrapper[27835]: I0318 13:42:58.293890 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssrn2\" (UniqueName: \"kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2\") pod \"825a0cb3-ac48-4974-8e6c-eb30956b617e\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " Mar 18 13:42:58.294048 master-0 kubenswrapper[27835]: I0318 13:42:58.293924 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts\") pod \"825a0cb3-ac48-4974-8e6c-eb30956b617e\" (UID: \"825a0cb3-ac48-4974-8e6c-eb30956b617e\") " Mar 18 13:42:58.294309 master-0 kubenswrapper[27835]: I0318 13:42:58.294091 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts\") pod \"9fb62e62-ee53-4992-a2af-06420a2812ed\" (UID: \"9fb62e62-ee53-4992-a2af-06420a2812ed\") " Mar 18 13:42:58.294949 master-0 kubenswrapper[27835]: I0318 13:42:58.294893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "825a0cb3-ac48-4974-8e6c-eb30956b617e" (UID: "825a0cb3-ac48-4974-8e6c-eb30956b617e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:58.300216 master-0 kubenswrapper[27835]: I0318 13:42:58.300125 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fb62e62-ee53-4992-a2af-06420a2812ed" (UID: "9fb62e62-ee53-4992-a2af-06420a2812ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:58.304882 master-0 kubenswrapper[27835]: I0318 13:42:58.302942 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2" (OuterVolumeSpecName: "kube-api-access-ssrn2") pod "825a0cb3-ac48-4974-8e6c-eb30956b617e" (UID: "825a0cb3-ac48-4974-8e6c-eb30956b617e"). InnerVolumeSpecName "kube-api-access-ssrn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:58.304882 master-0 kubenswrapper[27835]: I0318 13:42:58.304753 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r" (OuterVolumeSpecName: "kube-api-access-tn99r") pod "9fb62e62-ee53-4992-a2af-06420a2812ed" (UID: "9fb62e62-ee53-4992-a2af-06420a2812ed"). InnerVolumeSpecName "kube-api-access-tn99r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:58.311617 master-0 kubenswrapper[27835]: I0318 13:42:58.310041 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"] Mar 18 13:42:58.311617 master-0 kubenswrapper[27835]: I0318 13:42:58.310251 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="dnsmasq-dns" containerID="cri-o://04d0c178d5e66c62b9599398b2df917288fb6a8ba8647639521307d9f387f834" gracePeriod=10 Mar 18 13:42:58.406699 master-0 kubenswrapper[27835]: I0318 13:42:58.397243 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssrn2\" (UniqueName: \"kubernetes.io/projected/825a0cb3-ac48-4974-8e6c-eb30956b617e-kube-api-access-ssrn2\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:58.406699 master-0 kubenswrapper[27835]: I0318 13:42:58.397292 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825a0cb3-ac48-4974-8e6c-eb30956b617e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:58.406699 master-0 kubenswrapper[27835]: I0318 13:42:58.397305 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fb62e62-ee53-4992-a2af-06420a2812ed-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:58.406699 master-0 kubenswrapper[27835]: I0318 13:42:58.397317 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn99r\" (UniqueName: \"kubernetes.io/projected/9fb62e62-ee53-4992-a2af-06420a2812ed-kube-api-access-tn99r\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:58.678643 master-0 kubenswrapper[27835]: I0318 13:42:58.678585 27835 generic.go:334] "Generic (PLEG): container finished" podID="05a99a49-5215-40c9-ba30-54618aa67479" 
containerID="85327a55f766489236795af0d710f18fdd741fe468da5ee0e450b662530134cb" exitCode=0 Mar 18 13:42:58.678824 master-0 kubenswrapper[27835]: I0318 13:42:58.678673 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p2pp6" event={"ID":"05a99a49-5215-40c9-ba30-54618aa67479","Type":"ContainerDied","Data":"85327a55f766489236795af0d710f18fdd741fe468da5ee0e450b662530134cb"} Mar 18 13:42:58.688448 master-0 kubenswrapper[27835]: I0318 13:42:58.682881 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-ac18-account-create-update-t8fjp" event={"ID":"9fb62e62-ee53-4992-a2af-06420a2812ed","Type":"ContainerDied","Data":"a008afcd8b126afc69eeeb2a62296121e1ba07ab2e03a2816c890332d55ab8ce"} Mar 18 13:42:58.688448 master-0 kubenswrapper[27835]: I0318 13:42:58.682928 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a008afcd8b126afc69eeeb2a62296121e1ba07ab2e03a2816c890332d55ab8ce" Mar 18 13:42:58.688448 master-0 kubenswrapper[27835]: I0318 13:42:58.682988 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-ac18-account-create-update-t8fjp" Mar 18 13:42:58.719510 master-0 kubenswrapper[27835]: I0318 13:42:58.719447 27835 generic.go:334] "Generic (PLEG): container finished" podID="b8cceedf-f909-428d-953e-194d94f1c300" containerID="04d0c178d5e66c62b9599398b2df917288fb6a8ba8647639521307d9f387f834" exitCode=0 Mar 18 13:42:58.719714 master-0 kubenswrapper[27835]: I0318 13:42:58.719611 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerDied","Data":"04d0c178d5e66c62b9599398b2df917288fb6a8ba8647639521307d9f387f834"} Mar 18 13:42:58.721590 master-0 kubenswrapper[27835]: I0318 13:42:58.721555 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sj9px" event={"ID":"b0c2698f-e0c0-413f-8e86-184f8ab0b231","Type":"ContainerStarted","Data":"46c3faa622bcefa81c2e26872488e583d54416823e3442a962682ee4bb472155"} Mar 18 13:42:58.728819 master-0 kubenswrapper[27835]: I0318 13:42:58.728780 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7tsdz" event={"ID":"825a0cb3-ac48-4974-8e6c-eb30956b617e","Type":"ContainerDied","Data":"d74d6f03506180b25398a6a625bd404c17eb5e4856580bb494cdd3d086969a64"} Mar 18 13:42:58.729095 master-0 kubenswrapper[27835]: I0318 13:42:58.729024 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-7tsdz" Mar 18 13:42:58.729228 master-0 kubenswrapper[27835]: I0318 13:42:58.729037 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d74d6f03506180b25398a6a625bd404c17eb5e4856580bb494cdd3d086969a64" Mar 18 13:42:58.756580 master-0 kubenswrapper[27835]: I0318 13:42:58.756334 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-sj9px" podStartSLOduration=2.411492209 podStartE2EDuration="7.756307947s" podCreationTimestamp="2026-03-18 13:42:51 +0000 UTC" firstStartedPulling="2026-03-18 13:42:52.547776299 +0000 UTC m=+1136.512987849" lastFinishedPulling="2026-03-18 13:42:57.892592027 +0000 UTC m=+1141.857803587" observedRunningTime="2026-03-18 13:42:58.743193859 +0000 UTC m=+1142.708405429" watchObservedRunningTime="2026-03-18 13:42:58.756307947 +0000 UTC m=+1142.721519507" Mar 18 13:42:59.093438 master-0 kubenswrapper[27835]: I0318 13:42:59.093387 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:59.235389 master-0 kubenswrapper[27835]: I0318 13:42:59.235317 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt8hn\" (UniqueName: \"kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn\") pod \"b8cceedf-f909-428d-953e-194d94f1c300\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " Mar 18 13:42:59.235389 master-0 kubenswrapper[27835]: I0318 13:42:59.235380 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb\") pod \"b8cceedf-f909-428d-953e-194d94f1c300\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " Mar 18 13:42:59.235961 master-0 kubenswrapper[27835]: I0318 13:42:59.235425 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config\") pod \"b8cceedf-f909-428d-953e-194d94f1c300\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " Mar 18 13:42:59.235961 master-0 kubenswrapper[27835]: I0318 13:42:59.235487 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb\") pod \"b8cceedf-f909-428d-953e-194d94f1c300\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " Mar 18 13:42:59.235961 master-0 kubenswrapper[27835]: I0318 13:42:59.235548 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc\") pod \"b8cceedf-f909-428d-953e-194d94f1c300\" (UID: \"b8cceedf-f909-428d-953e-194d94f1c300\") " Mar 18 13:42:59.248739 master-0 kubenswrapper[27835]: I0318 13:42:59.248625 27835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn" (OuterVolumeSpecName: "kube-api-access-tt8hn") pod "b8cceedf-f909-428d-953e-194d94f1c300" (UID: "b8cceedf-f909-428d-953e-194d94f1c300"). InnerVolumeSpecName "kube-api-access-tt8hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:42:59.290472 master-0 kubenswrapper[27835]: I0318 13:42:59.290282 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b8cceedf-f909-428d-953e-194d94f1c300" (UID: "b8cceedf-f909-428d-953e-194d94f1c300"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:59.294318 master-0 kubenswrapper[27835]: I0318 13:42:59.294259 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config" (OuterVolumeSpecName: "config") pod "b8cceedf-f909-428d-953e-194d94f1c300" (UID: "b8cceedf-f909-428d-953e-194d94f1c300"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:59.299841 master-0 kubenswrapper[27835]: I0318 13:42:59.299814 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b8cceedf-f909-428d-953e-194d94f1c300" (UID: "b8cceedf-f909-428d-953e-194d94f1c300"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:59.309131 master-0 kubenswrapper[27835]: I0318 13:42:59.309093 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b8cceedf-f909-428d-953e-194d94f1c300" (UID: "b8cceedf-f909-428d-953e-194d94f1c300"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:42:59.337740 master-0 kubenswrapper[27835]: I0318 13:42:59.337486 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt8hn\" (UniqueName: \"kubernetes.io/projected/b8cceedf-f909-428d-953e-194d94f1c300-kube-api-access-tt8hn\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:59.337740 master-0 kubenswrapper[27835]: I0318 13:42:59.337734 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:59.337740 master-0 kubenswrapper[27835]: I0318 13:42:59.337749 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:59.338026 master-0 kubenswrapper[27835]: I0318 13:42:59.337761 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:59.338026 master-0 kubenswrapper[27835]: I0318 13:42:59.337776 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b8cceedf-f909-428d-953e-194d94f1c300-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:42:59.769256 master-0 kubenswrapper[27835]: I0318 
13:42:59.769164 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" event={"ID":"b8cceedf-f909-428d-953e-194d94f1c300","Type":"ContainerDied","Data":"970aa6790ee9974dbdc046980d19aad935316b13f23c4eaccc7085825c9d870c"} Mar 18 13:42:59.769540 master-0 kubenswrapper[27835]: I0318 13:42:59.769262 27835 scope.go:117] "RemoveContainer" containerID="04d0c178d5e66c62b9599398b2df917288fb6a8ba8647639521307d9f387f834" Mar 18 13:42:59.769540 master-0 kubenswrapper[27835]: I0318 13:42:59.769264 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b8649b7f9-8486f" Mar 18 13:42:59.799301 master-0 kubenswrapper[27835]: I0318 13:42:59.799256 27835 scope.go:117] "RemoveContainer" containerID="67b6aa4cfe0670be7acf6c6d9ace26123bafc594adb6ff872efcf1afce49395c" Mar 18 13:42:59.829989 master-0 kubenswrapper[27835]: I0318 13:42:59.829919 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"] Mar 18 13:42:59.845577 master-0 kubenswrapper[27835]: I0318 13:42:59.845287 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b8649b7f9-8486f"] Mar 18 13:43:00.305799 master-0 kubenswrapper[27835]: I0318 13:43:00.292511 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-p2pp6" Mar 18 13:43:00.305799 master-0 kubenswrapper[27835]: I0318 13:43:00.295557 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cceedf-f909-428d-953e-194d94f1c300" path="/var/lib/kubelet/pods/b8cceedf-f909-428d-953e-194d94f1c300/volumes" Mar 18 13:43:00.378616 master-0 kubenswrapper[27835]: I0318 13:43:00.378555 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data\") pod \"05a99a49-5215-40c9-ba30-54618aa67479\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " Mar 18 13:43:00.378616 master-0 kubenswrapper[27835]: I0318 13:43:00.378620 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle\") pod \"05a99a49-5215-40c9-ba30-54618aa67479\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " Mar 18 13:43:00.378913 master-0 kubenswrapper[27835]: I0318 13:43:00.378852 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data\") pod \"05a99a49-5215-40c9-ba30-54618aa67479\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " Mar 18 13:43:00.379243 master-0 kubenswrapper[27835]: I0318 13:43:00.379179 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8x98\" (UniqueName: \"kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98\") pod \"05a99a49-5215-40c9-ba30-54618aa67479\" (UID: \"05a99a49-5215-40c9-ba30-54618aa67479\") " Mar 18 13:43:00.382999 master-0 kubenswrapper[27835]: I0318 13:43:00.382964 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "05a99a49-5215-40c9-ba30-54618aa67479" (UID: "05a99a49-5215-40c9-ba30-54618aa67479"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:00.383987 master-0 kubenswrapper[27835]: I0318 13:43:00.383901 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98" (OuterVolumeSpecName: "kube-api-access-l8x98") pod "05a99a49-5215-40c9-ba30-54618aa67479" (UID: "05a99a49-5215-40c9-ba30-54618aa67479"). InnerVolumeSpecName "kube-api-access-l8x98". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:00.404512 master-0 kubenswrapper[27835]: I0318 13:43:00.404436 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05a99a49-5215-40c9-ba30-54618aa67479" (UID: "05a99a49-5215-40c9-ba30-54618aa67479"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:00.437595 master-0 kubenswrapper[27835]: I0318 13:43:00.437488 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data" (OuterVolumeSpecName: "config-data") pod "05a99a49-5215-40c9-ba30-54618aa67479" (UID: "05a99a49-5215-40c9-ba30-54618aa67479"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:00.482279 master-0 kubenswrapper[27835]: I0318 13:43:00.482108 27835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:00.482279 master-0 kubenswrapper[27835]: I0318 13:43:00.482163 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8x98\" (UniqueName: \"kubernetes.io/projected/05a99a49-5215-40c9-ba30-54618aa67479-kube-api-access-l8x98\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:00.482279 master-0 kubenswrapper[27835]: I0318 13:43:00.482173 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:00.482279 master-0 kubenswrapper[27835]: I0318 13:43:00.482184 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05a99a49-5215-40c9-ba30-54618aa67479-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:00.786307 master-0 kubenswrapper[27835]: I0318 13:43:00.783901 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-p2pp6" event={"ID":"05a99a49-5215-40c9-ba30-54618aa67479","Type":"ContainerDied","Data":"3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a"} Mar 18 13:43:00.786307 master-0 kubenswrapper[27835]: I0318 13:43:00.783955 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7059102139906b50511e4b0d93afe4faa9820b4add0f0521f1f9e2312b315a" Mar 18 13:43:00.786307 master-0 kubenswrapper[27835]: I0318 13:43:00.783978 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-p2pp6" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.130539 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"] Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: E0318 13:43:01.131067 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a99a49-5215-40c9-ba30-54618aa67479" containerName="glance-db-sync" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.131086 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a99a49-5215-40c9-ba30-54618aa67479" containerName="glance-db-sync" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: E0318 13:43:01.131104 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fb62e62-ee53-4992-a2af-06420a2812ed" containerName="mariadb-account-create-update" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.131111 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fb62e62-ee53-4992-a2af-06420a2812ed" containerName="mariadb-account-create-update" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: E0318 13:43:01.131125 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825a0cb3-ac48-4974-8e6c-eb30956b617e" containerName="mariadb-database-create" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.131132 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="825a0cb3-ac48-4974-8e6c-eb30956b617e" containerName="mariadb-database-create" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: E0318 13:43:01.131152 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b8658d-cf54-48b9-b4ee-fceef6236403" containerName="mariadb-database-create" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.131158 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b8658d-cf54-48b9-b4ee-fceef6236403" containerName="mariadb-database-create" Mar 
18 13:43:01.134677 master-0 kubenswrapper[27835]: E0318 13:43:01.131170 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="dnsmasq-dns" Mar 18 13:43:01.134677 master-0 kubenswrapper[27835]: I0318 13:43:01.131176 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="dnsmasq-dns" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: E0318 13:43:01.134816 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="init" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: I0318 13:43:01.134854 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="init" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: E0318 13:43:01.134890 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad967d0-9ad1-4342-9885-e5e28a68d3af" containerName="mariadb-account-create-update" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: I0318 13:43:01.134897 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad967d0-9ad1-4342-9885-e5e28a68d3af" containerName="mariadb-account-create-update" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: I0318 13:43:01.135263 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b8658d-cf54-48b9-b4ee-fceef6236403" containerName="mariadb-database-create" Mar 18 13:43:01.135290 master-0 kubenswrapper[27835]: I0318 13:43:01.135279 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad967d0-9ad1-4342-9885-e5e28a68d3af" containerName="mariadb-account-create-update" Mar 18 13:43:01.135784 master-0 kubenswrapper[27835]: I0318 13:43:01.135300 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="825a0cb3-ac48-4974-8e6c-eb30956b617e" containerName="mariadb-database-create" Mar 18 13:43:01.135784 master-0 
kubenswrapper[27835]: I0318 13:43:01.135308 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb62e62-ee53-4992-a2af-06420a2812ed" containerName="mariadb-account-create-update" Mar 18 13:43:01.135784 master-0 kubenswrapper[27835]: I0318 13:43:01.135329 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cceedf-f909-428d-953e-194d94f1c300" containerName="dnsmasq-dns" Mar 18 13:43:01.135784 master-0 kubenswrapper[27835]: I0318 13:43:01.135347 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a99a49-5215-40c9-ba30-54618aa67479" containerName="glance-db-sync" Mar 18 13:43:01.142043 master-0 kubenswrapper[27835]: I0318 13:43:01.136730 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.182633 master-0 kubenswrapper[27835]: I0318 13:43:01.182130 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"] Mar 18 13:43:01.203686 master-0 kubenswrapper[27835]: I0318 13:43:01.203635 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m78ds\" (UniqueName: \"kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.203893 master-0 kubenswrapper[27835]: I0318 13:43:01.203700 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.203893 master-0 kubenswrapper[27835]: I0318 13:43:01.203722 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.204013 master-0 kubenswrapper[27835]: I0318 13:43:01.203964 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.204077 master-0 kubenswrapper[27835]: I0318 13:43:01.204036 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.204293 master-0 kubenswrapper[27835]: I0318 13:43:01.204263 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.305830 master-0 kubenswrapper[27835]: I0318 13:43:01.305764 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:01.307309 master-0 
kubenswrapper[27835]: I0318 13:43:01.306583 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.307309 master-0 kubenswrapper[27835]: I0318 13:43:01.306919 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m78ds\" (UniqueName: \"kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.307309 master-0 kubenswrapper[27835]: I0318 13:43:01.307107 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.307309 master-0 kubenswrapper[27835]: I0318 13:43:01.307166 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.307593 master-0 kubenswrapper[27835]: I0318 13:43:01.307328 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.308070 master-0 kubenswrapper[27835]: I0318 13:43:01.308033 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.308070 master-0 kubenswrapper[27835]: I0318 13:43:01.308056 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.308348 master-0 kubenswrapper[27835]: I0318 13:43:01.308316 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.308388 master-0 kubenswrapper[27835]: I0318 13:43:01.308315 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.308443 master-0 kubenswrapper[27835]: I0318 13:43:01.308391 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.330559 master-0 kubenswrapper[27835]: I0318 13:43:01.329462 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m78ds\" (UniqueName: \"kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds\") pod \"dnsmasq-dns-5bbb956cfc-8ggj8\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.464854 master-0 kubenswrapper[27835]: I0318 13:43:01.464792 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:01.998458 master-0 kubenswrapper[27835]: I0318 13:43:01.998391 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"]
Mar 18 13:43:02.001660 master-0 kubenswrapper[27835]: W0318 13:43:02.001614 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe8ccd0f_8ecc_4d54_970d_3812c661d62b.slice/crio-e84e999b5da0d1df2dd0d25cdabc56170a514b1565300dc006c5a2f6bfa298da WatchSource:0}: Error finding container e84e999b5da0d1df2dd0d25cdabc56170a514b1565300dc006c5a2f6bfa298da: Status 404 returned error can't find the container with id e84e999b5da0d1df2dd0d25cdabc56170a514b1565300dc006c5a2f6bfa298da
Mar 18 13:43:02.812855 master-0 kubenswrapper[27835]: I0318 13:43:02.812655 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerID="457df5584f08ae10c617776b79b2e2d257e35ee6229229420a7a741ace78919d" exitCode=0
Mar 18 13:43:02.812855 master-0 kubenswrapper[27835]: I0318 13:43:02.812735 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" event={"ID":"fe8ccd0f-8ecc-4d54-970d-3812c661d62b","Type":"ContainerDied","Data":"457df5584f08ae10c617776b79b2e2d257e35ee6229229420a7a741ace78919d"}
Mar 18 13:43:02.812855 master-0 kubenswrapper[27835]: I0318 13:43:02.812817 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" event={"ID":"fe8ccd0f-8ecc-4d54-970d-3812c661d62b","Type":"ContainerStarted","Data":"e84e999b5da0d1df2dd0d25cdabc56170a514b1565300dc006c5a2f6bfa298da"}
Mar 18 13:43:03.839475 master-0 kubenswrapper[27835]: I0318 13:43:03.839310 27835 generic.go:334] "Generic (PLEG): container finished" podID="b0c2698f-e0c0-413f-8e86-184f8ab0b231" containerID="46c3faa622bcefa81c2e26872488e583d54416823e3442a962682ee4bb472155" exitCode=0
Mar 18 13:43:03.839966 master-0 kubenswrapper[27835]: I0318 13:43:03.839473 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sj9px" event={"ID":"b0c2698f-e0c0-413f-8e86-184f8ab0b231","Type":"ContainerDied","Data":"46c3faa622bcefa81c2e26872488e583d54416823e3442a962682ee4bb472155"}
Mar 18 13:43:03.847273 master-0 kubenswrapper[27835]: I0318 13:43:03.847211 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" event={"ID":"fe8ccd0f-8ecc-4d54-970d-3812c661d62b","Type":"ContainerStarted","Data":"eebe47267e802400050009f337899c39fdca340a0eaa40ec505f23e5a9befec7"}
Mar 18 13:43:03.848142 master-0 kubenswrapper[27835]: I0318 13:43:03.848070 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8"
Mar 18 13:43:03.894985 master-0 kubenswrapper[27835]: I0318 13:43:03.894776 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" podStartSLOduration=2.894755053 podStartE2EDuration="2.894755053s" podCreationTimestamp="2026-03-18 13:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:03.880737851 +0000 UTC m=+1147.845949421" watchObservedRunningTime="2026-03-18 13:43:03.894755053 +0000 UTC m=+1147.859966623"
Mar 18 13:43:05.360862 master-0 kubenswrapper[27835]: I0318 13:43:05.360823 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sj9px"
Mar 18 13:43:05.406610 master-0 kubenswrapper[27835]: I0318 13:43:05.403743 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle\") pod \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") "
Mar 18 13:43:05.406610 master-0 kubenswrapper[27835]: I0318 13:43:05.403968 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data\") pod \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") "
Mar 18 13:43:05.406610 master-0 kubenswrapper[27835]: I0318 13:43:05.404019 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j94t6\" (UniqueName: \"kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6\") pod \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\" (UID: \"b0c2698f-e0c0-413f-8e86-184f8ab0b231\") "
Mar 18 13:43:05.408051 master-0 kubenswrapper[27835]: I0318 13:43:05.407978 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6" (OuterVolumeSpecName: "kube-api-access-j94t6") pod "b0c2698f-e0c0-413f-8e86-184f8ab0b231" (UID: "b0c2698f-e0c0-413f-8e86-184f8ab0b231"). InnerVolumeSpecName "kube-api-access-j94t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:43:05.436734 master-0 kubenswrapper[27835]: I0318 13:43:05.436670 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c2698f-e0c0-413f-8e86-184f8ab0b231" (UID: "b0c2698f-e0c0-413f-8e86-184f8ab0b231"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:05.456910 master-0 kubenswrapper[27835]: I0318 13:43:05.456830 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data" (OuterVolumeSpecName: "config-data") pod "b0c2698f-e0c0-413f-8e86-184f8ab0b231" (UID: "b0c2698f-e0c0-413f-8e86-184f8ab0b231"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:05.506097 master-0 kubenswrapper[27835]: I0318 13:43:05.506023 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:05.506097 master-0 kubenswrapper[27835]: I0318 13:43:05.506077 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j94t6\" (UniqueName: \"kubernetes.io/projected/b0c2698f-e0c0-413f-8e86-184f8ab0b231-kube-api-access-j94t6\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:05.506097 master-0 kubenswrapper[27835]: I0318 13:43:05.506092 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c2698f-e0c0-413f-8e86-184f8ab0b231-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:05.871862 master-0 kubenswrapper[27835]: I0318 13:43:05.871797 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-sj9px" event={"ID":"b0c2698f-e0c0-413f-8e86-184f8ab0b231","Type":"ContainerDied","Data":"53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04"}
Mar 18 13:43:05.871862 master-0 kubenswrapper[27835]: I0318 13:43:05.871842 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ee56471777bc9a10b3dce44806fb9deb956800e3adfe803a51292bc90a8e04"
Mar 18 13:43:05.872197 master-0 kubenswrapper[27835]: I0318 13:43:05.871895 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-sj9px"
Mar 18 13:43:06.165453 master-0 kubenswrapper[27835]: I0318 13:43:06.164677 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"]
Mar 18 13:43:06.165453 master-0 kubenswrapper[27835]: I0318 13:43:06.165000 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="dnsmasq-dns" containerID="cri-o://eebe47267e802400050009f337899c39fdca340a0eaa40ec505f23e5a9befec7" gracePeriod=10
Mar 18 13:43:06.214439 master-0 kubenswrapper[27835]: I0318 13:43:06.203919 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nq7m4"]
Mar 18 13:43:06.214439 master-0 kubenswrapper[27835]: E0318 13:43:06.204577 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c2698f-e0c0-413f-8e86-184f8ab0b231" containerName="keystone-db-sync"
Mar 18 13:43:06.214439 master-0 kubenswrapper[27835]: I0318 13:43:06.204598 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c2698f-e0c0-413f-8e86-184f8ab0b231" containerName="keystone-db-sync"
Mar 18 13:43:06.214439 master-0 kubenswrapper[27835]: I0318 13:43:06.204902 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c2698f-e0c0-413f-8e86-184f8ab0b231" containerName="keystone-db-sync"
Mar 18 13:43:06.214439 master-0 kubenswrapper[27835]: I0318 13:43:06.205797 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.223261 master-0 kubenswrapper[27835]: I0318 13:43:06.222262 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 18 13:43:06.223261 master-0 kubenswrapper[27835]: I0318 13:43:06.222295 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 18 13:43:06.223261 master-0 kubenswrapper[27835]: I0318 13:43:06.222501 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 18 13:43:06.223261 master-0 kubenswrapper[27835]: I0318 13:43:06.222759 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243255 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243308 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243359 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243382 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243423 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrwj\" (UniqueName: \"kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.243489 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.252051 master-0 kubenswrapper[27835]: I0318 13:43:06.250325 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"]
Mar 18 13:43:06.253125 master-0 kubenswrapper[27835]: I0318 13:43:06.253068 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.262465 master-0 kubenswrapper[27835]: I0318 13:43:06.261508 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nq7m4"]
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.440954 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.441314 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.441354 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.441758 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.443112 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.443179 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.443249 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.444017 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.444106 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.445207 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.445457 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrwj\" (UniqueName: \"kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.454019 master-0 kubenswrapper[27835]: I0318 13:43:06.445815 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsgn\" (UniqueName: \"kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.538865 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.552135 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.552752 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.553335 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.563157 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.573337 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.573379 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.587625 master-0 kubenswrapper[27835]: I0318 13:43:06.586523 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.644500 master-0 kubenswrapper[27835]: I0318 13:43:06.636382 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.644500 master-0 kubenswrapper[27835]: I0318 13:43:06.636975 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"]
Mar 18 13:43:06.646429 master-0 kubenswrapper[27835]: I0318 13:43:06.645061 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrwj\" (UniqueName: \"kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj\") pod \"keystone-bootstrap-nq7m4\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.646429 master-0 kubenswrapper[27835]: I0318 13:43:06.645610 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.646429 master-0 kubenswrapper[27835]: I0318 13:43:06.645687 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.647599 master-0 kubenswrapper[27835]: I0318 13:43:06.647078 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.647599 master-0 kubenswrapper[27835]: I0318 13:43:06.647539 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.673953 master-0 kubenswrapper[27835]: I0318 13:43:06.668171 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.673953 master-0 kubenswrapper[27835]: I0318 13:43:06.668292 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bsgn\" (UniqueName: \"kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.687395 master-0 kubenswrapper[27835]: I0318 13:43:06.685545 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.706747 master-0 kubenswrapper[27835]: I0318 13:43:06.702356 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bsgn\" (UniqueName: \"kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn\") pod \"dnsmasq-dns-dcc5b569-tg282\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.716458 master-0 kubenswrapper[27835]: I0318 13:43:06.715500 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-vkvz7"]
Mar 18 13:43:06.716458 master-0 kubenswrapper[27835]: I0318 13:43:06.715968 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:06.745393 master-0 kubenswrapper[27835]: I0318 13:43:06.744914 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nq7m4"
Mar 18 13:43:06.817696 master-0 kubenswrapper[27835]: I0318 13:43:06.817626 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:06.955649 master-0 kubenswrapper[27835]: I0318 13:43:06.955490 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-mtjpd"]
Mar 18 13:43:06.965759 master-0 kubenswrapper[27835]: I0318 13:43:06.965672 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerID="eebe47267e802400050009f337899c39fdca340a0eaa40ec505f23e5a9befec7" exitCode=0
Mar 18 13:43:06.972468 master-0 kubenswrapper[27835]: I0318 13:43:06.968527 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" event={"ID":"fe8ccd0f-8ecc-4d54-970d-3812c661d62b","Type":"ContainerDied","Data":"eebe47267e802400050009f337899c39fdca340a0eaa40ec505f23e5a9befec7"}
Mar 18 13:43:06.972468 master-0 kubenswrapper[27835]: I0318 13:43:06.968652 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtjpd"
Mar 18 13:43:06.972468 master-0 kubenswrapper[27835]: I0318 13:43:06.972228 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 18 13:43:06.972468 master-0 kubenswrapper[27835]: I0318 13:43:06.972374 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 18 13:43:06.976310 master-0 kubenswrapper[27835]: I0318 13:43:06.976263 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-vkvz7"]
Mar 18 13:43:06.982290 master-0 kubenswrapper[27835]: I0318 13:43:06.982222 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjwm2\" (UniqueName: \"kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:06.982462 master-0 kubenswrapper[27835]: I0318 13:43:06.982287 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:06.987984 master-0 kubenswrapper[27835]: I0318 13:43:06.987890 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-db-sync-jhwx7"]
Mar 18 13:43:06.997521 master-0 kubenswrapper[27835]: I0318 13:43:06.997452 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.003424 master-0 kubenswrapper[27835]: I0318 13:43:07.003364 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-scripts"
Mar 18 13:43:07.003725 master-0 kubenswrapper[27835]: I0318 13:43:07.003691 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-config-data"
Mar 18 13:43:07.029556 master-0 kubenswrapper[27835]: I0318 13:43:07.029490 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mtjpd"]
Mar 18 13:43:07.040478 master-0 kubenswrapper[27835]: I0318 13:43:07.038724 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-db-sync-jhwx7"]
Mar 18 13:43:07.071462 master-0 kubenswrapper[27835]: I0318 13:43:07.060596 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"]
Mar 18 13:43:07.088945 master-0 kubenswrapper[27835]: I0318 13:43:07.088353 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.089208 master-0 kubenswrapper[27835]: I0318 13:43:07.089191 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjwm2\" (UniqueName: \"kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:07.089338 master-0 kubenswrapper[27835]: I0318 13:43:07.089323 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.089791 master-0 kubenswrapper[27835]: I0318 13:43:07.089715 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:07.090604 master-0 kubenswrapper[27835]: I0318 13:43:07.090514 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:07.090768 master-0 kubenswrapper[27835]: I0318 13:43:07.090499 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rf9\" (UniqueName: \"kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd"
Mar 18 13:43:07.090908 master-0 kubenswrapper[27835]: I0318 13:43:07.090894 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzj7\" (UniqueName: \"kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.091053 master-0 kubenswrapper[27835]: I0318 13:43:07.091037 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.092123 master-0 kubenswrapper[27835]: I0318 13:43:07.092071 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7"
Mar 18 13:43:07.092477 master-0 kubenswrapper[27835]: I0318 13:43:07.092430 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd"
Mar 18 13:43:07.092672 master-0 kubenswrapper[27835]: I0318 13:43:07.092658 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.097466 master-0 kubenswrapper[27835]: I0318 13:43:07.092560 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-f5fe-account-create-update-4dwnc"] Mar 18 13:43:07.097466 master-0 kubenswrapper[27835]: I0318 13:43:07.096711 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.098072 master-0 kubenswrapper[27835]: I0318 13:43:07.098049 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.099872 master-0 kubenswrapper[27835]: I0318 13:43:07.099854 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 18 13:43:07.120744 master-0 kubenswrapper[27835]: I0318 13:43:07.116999 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjwm2\" (UniqueName: \"kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2\") pod \"ironic-db-create-vkvz7\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") " pod="openstack/ironic-db-create-vkvz7" Mar 18 13:43:07.145309 master-0 kubenswrapper[27835]: I0318 13:43:07.145222 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f5fe-account-create-update-4dwnc"] Mar 18 13:43:07.158529 master-0 kubenswrapper[27835]: I0318 13:43:07.158449 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8mqtx"] Mar 18 13:43:07.160963 master-0 kubenswrapper[27835]: I0318 13:43:07.159977 27835 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.169743 master-0 kubenswrapper[27835]: I0318 13:43:07.162912 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 18 13:43:07.169743 master-0 kubenswrapper[27835]: I0318 13:43:07.163434 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 18 13:43:07.191434 master-0 kubenswrapper[27835]: I0318 13:43:07.191173 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:43:07.199377 master-0 kubenswrapper[27835]: I0318 13:43:07.195131 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.199706 master-0 kubenswrapper[27835]: I0318 13:43:07.199591 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzj7\" (UniqueName: \"kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.199706 master-0 kubenswrapper[27835]: I0318 13:43:07.199675 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8xr\" (UniqueName: \"kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.199811 master-0 kubenswrapper[27835]: I0318 13:43:07.199709 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle\") pod 
\"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.199811 master-0 kubenswrapper[27835]: I0318 13:43:07.199756 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.199811 master-0 kubenswrapper[27835]: I0318 13:43:07.199784 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.199902 master-0 kubenswrapper[27835]: I0318 13:43:07.199812 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.199902 master-0 kubenswrapper[27835]: I0318 13:43:07.199844 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.199964 master-0 kubenswrapper[27835]: I0318 13:43:07.199932 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.200004 master-0 kubenswrapper[27835]: I0318 13:43:07.199972 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.200047 master-0 kubenswrapper[27835]: I0318 13:43:07.200010 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.200620 master-0 kubenswrapper[27835]: I0318 13:43:07.200112 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2rf9\" (UniqueName: \"kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.214498 master-0 kubenswrapper[27835]: I0318 13:43:07.212509 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.214498 master-0 kubenswrapper[27835]: I0318 13:43:07.213904 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.218960 master-0 kubenswrapper[27835]: I0318 13:43:07.218567 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.218960 master-0 kubenswrapper[27835]: I0318 13:43:07.218902 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.222576 master-0 kubenswrapper[27835]: I0318 13:43:07.222382 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.224714 master-0 kubenswrapper[27835]: I0318 13:43:07.224310 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.224714 master-0 kubenswrapper[27835]: I0318 13:43:07.224628 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle\") pod 
\"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.231141 master-0 kubenswrapper[27835]: I0318 13:43:07.225955 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vkvz7" Mar 18 13:43:07.246397 master-0 kubenswrapper[27835]: I0318 13:43:07.244625 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2rf9\" (UniqueName: \"kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9\") pod \"neutron-db-sync-mtjpd\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.282213 master-0 kubenswrapper[27835]: I0318 13:43:07.282103 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzj7\" (UniqueName: \"kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7\") pod \"cinder-07518-db-sync-jhwx7\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.306425 master-0 kubenswrapper[27835]: I0318 13:43:07.306345 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.306620 master-0 kubenswrapper[27835]: I0318 13:43:07.306477 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.306620 master-0 kubenswrapper[27835]: I0318 13:43:07.306539 
27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8xr\" (UniqueName: \"kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.306620 master-0 kubenswrapper[27835]: I0318 13:43:07.306604 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29d84\" (UniqueName: \"kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.307747 master-0 kubenswrapper[27835]: I0318 13:43:07.307217 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.307747 master-0 kubenswrapper[27835]: I0318 13:43:07.307322 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.307747 master-0 kubenswrapper[27835]: I0318 13:43:07.307544 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") 
" pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.309813 master-0 kubenswrapper[27835]: I0318 13:43:07.309775 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.310162 master-0 kubenswrapper[27835]: I0318 13:43:07.310123 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.310219 master-0 kubenswrapper[27835]: I0318 13:43:07.310188 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69cmv\" (UniqueName: \"kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.310311 master-0 kubenswrapper[27835]: I0318 13:43:07.310291 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.310371 master-0 kubenswrapper[27835]: I0318 13:43:07.310352 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.310424 master-0 kubenswrapper[27835]: I0318 13:43:07.310382 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.310458 master-0 kubenswrapper[27835]: I0318 13:43:07.310440 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.327025 master-0 kubenswrapper[27835]: I0318 13:43:07.326978 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:07.352511 master-0 kubenswrapper[27835]: I0318 13:43:07.351500 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:07.358123 master-0 kubenswrapper[27835]: I0318 13:43:07.357407 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8xr\" (UniqueName: \"kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr\") pod \"ironic-f5fe-account-create-update-4dwnc\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.391330 master-0 kubenswrapper[27835]: I0318 13:43:07.391079 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8mqtx"] Mar 18 13:43:07.412313 master-0 kubenswrapper[27835]: I0318 13:43:07.412176 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.412313 master-0 kubenswrapper[27835]: I0318 13:43:07.412280 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.412597 master-0 kubenswrapper[27835]: I0318 13:43:07.412342 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29d84\" (UniqueName: \"kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.412597 master-0 kubenswrapper[27835]: I0318 13:43:07.412370 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.412597 master-0 kubenswrapper[27835]: I0318 13:43:07.412529 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.412597 master-0 kubenswrapper[27835]: I0318 13:43:07.412560 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.412597 master-0 kubenswrapper[27835]: I0318 13:43:07.412584 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69cmv\" (UniqueName: \"kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.412815 master-0 kubenswrapper[27835]: I0318 13:43:07.412632 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.412815 master-0 kubenswrapper[27835]: I0318 13:43:07.412655 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.412815 master-0 kubenswrapper[27835]: I0318 13:43:07.412678 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.412815 master-0 kubenswrapper[27835]: I0318 13:43:07.412703 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.417372 master-0 kubenswrapper[27835]: I0318 13:43:07.416433 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.418593 master-0 kubenswrapper[27835]: I0318 13:43:07.418008 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.419730 master-0 kubenswrapper[27835]: I0318 13:43:07.418826 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.421975 master-0 kubenswrapper[27835]: I0318 13:43:07.421662 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.423165 master-0 kubenswrapper[27835]: I0318 13:43:07.422104 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.423165 master-0 kubenswrapper[27835]: I0318 13:43:07.422427 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.423165 master-0 kubenswrapper[27835]: I0318 13:43:07.422737 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.423165 master-0 kubenswrapper[27835]: I0318 13:43:07.422814 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.425442 master-0 kubenswrapper[27835]: I0318 13:43:07.424550 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:43:07.433479 master-0 kubenswrapper[27835]: I0318 13:43:07.433161 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.463393 master-0 kubenswrapper[27835]: I0318 13:43:07.462906 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69cmv\" (UniqueName: \"kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv\") pod \"dnsmasq-dns-84b7d9bdfc-bnzwf\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.463393 master-0 kubenswrapper[27835]: I0318 13:43:07.463094 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29d84\" (UniqueName: \"kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84\") pod \"placement-db-sync-8mqtx\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.463393 master-0 kubenswrapper[27835]: I0318 13:43:07.463400 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:07.517221 master-0 kubenswrapper[27835]: I0318 13:43:07.514176 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:07.526363 master-0 kubenswrapper[27835]: I0318 13:43:07.525663 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:07.592962 master-0 kubenswrapper[27835]: I0318 13:43:07.592116 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:07.726834 master-0 kubenswrapper[27835]: I0318 13:43:07.720484 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nq7m4"] Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740056 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m78ds\" (UniqueName: \"kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740118 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740230 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740300 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740362 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.748962 master-0 kubenswrapper[27835]: I0318 13:43:07.740391 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config\") pod \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\" (UID: \"fe8ccd0f-8ecc-4d54-970d-3812c661d62b\") " Mar 18 13:43:07.791081 master-0 kubenswrapper[27835]: I0318 13:43:07.790986 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"] Mar 18 13:43:07.794312 master-0 kubenswrapper[27835]: I0318 13:43:07.794250 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds" (OuterVolumeSpecName: "kube-api-access-m78ds") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "kube-api-access-m78ds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:07.849793 master-0 kubenswrapper[27835]: I0318 13:43:07.846996 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m78ds\" (UniqueName: \"kubernetes.io/projected/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-kube-api-access-m78ds\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:07.983113 master-0 kubenswrapper[27835]: I0318 13:43:07.982949 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dcc5b569-tg282" event={"ID":"a8203106-e334-473e-8e67-f6fddefc8016","Type":"ContainerStarted","Data":"a73e25eb244337d3ea8352f1d62450a0806bf91f66999a3fbcc96838ae356902"} Mar 18 13:43:07.994194 master-0 kubenswrapper[27835]: I0318 13:43:07.989239 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nq7m4" event={"ID":"562b6f7a-8086-457b-8f82-cc3c56043ac8","Type":"ContainerStarted","Data":"e93cbc1446ae4cfdbcc2a72a2613fcfb8b40273c9fbc819a6ceb4b99f26a1747"} Mar 18 13:43:07.994194 master-0 kubenswrapper[27835]: I0318 13:43:07.991089 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config" (OuterVolumeSpecName: "config") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:07.995155 master-0 kubenswrapper[27835]: I0318 13:43:07.995103 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" event={"ID":"fe8ccd0f-8ecc-4d54-970d-3812c661d62b","Type":"ContainerDied","Data":"e84e999b5da0d1df2dd0d25cdabc56170a514b1565300dc006c5a2f6bfa298da"} Mar 18 13:43:07.995249 master-0 kubenswrapper[27835]: I0318 13:43:07.995170 27835 scope.go:117] "RemoveContainer" containerID="eebe47267e802400050009f337899c39fdca340a0eaa40ec505f23e5a9befec7" Mar 18 13:43:07.995384 master-0 kubenswrapper[27835]: I0318 13:43:07.995341 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:07.999621 master-0 kubenswrapper[27835]: I0318 13:43:07.999265 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:08.011854 master-0 kubenswrapper[27835]: I0318 13:43:08.010006 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:08.013124 master-0 kubenswrapper[27835]: I0318 13:43:08.012909 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:08.019179 master-0 kubenswrapper[27835]: I0318 13:43:08.019126 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe8ccd0f-8ecc-4d54-970d-3812c661d62b" (UID: "fe8ccd0f-8ecc-4d54-970d-3812c661d62b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:08.050154 master-0 kubenswrapper[27835]: I0318 13:43:08.049730 27835 scope.go:117] "RemoveContainer" containerID="457df5584f08ae10c617776b79b2e2d257e35ee6229229420a7a741ace78919d" Mar 18 13:43:08.052529 master-0 kubenswrapper[27835]: I0318 13:43:08.052324 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:08.052529 master-0 kubenswrapper[27835]: I0318 13:43:08.052367 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:08.052529 master-0 kubenswrapper[27835]: I0318 13:43:08.052379 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-dns-svc\") on node \"master-0\" 
DevicePath \"\"" Mar 18 13:43:08.052529 master-0 kubenswrapper[27835]: I0318 13:43:08.052391 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:08.052529 master-0 kubenswrapper[27835]: I0318 13:43:08.052404 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe8ccd0f-8ecc-4d54-970d-3812c661d62b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:08.431549 master-0 kubenswrapper[27835]: I0318 13:43:08.423836 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-vkvz7"] Mar 18 13:43:08.439381 master-0 kubenswrapper[27835]: I0318 13:43:08.439180 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:08.439615 master-0 kubenswrapper[27835]: E0318 13:43:08.439593 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="init" Mar 18 13:43:08.439615 master-0 kubenswrapper[27835]: I0318 13:43:08.439610 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="init" Mar 18 13:43:08.439695 master-0 kubenswrapper[27835]: E0318 13:43:08.439640 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="dnsmasq-dns" Mar 18 13:43:08.439695 master-0 kubenswrapper[27835]: I0318 13:43:08.439647 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="dnsmasq-dns" Mar 18 13:43:08.439899 master-0 kubenswrapper[27835]: I0318 13:43:08.439885 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" containerName="dnsmasq-dns" Mar 18 
13:43:08.499600 master-0 kubenswrapper[27835]: I0318 13:43:08.440895 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.499600 master-0 kubenswrapper[27835]: I0318 13:43:08.461011 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 18 13:43:08.499600 master-0 kubenswrapper[27835]: I0318 13:43:08.461256 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-external-config-data" Mar 18 13:43:08.499600 master-0 kubenswrapper[27835]: I0318 13:43:08.461366 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 13:43:08.572152 master-0 kubenswrapper[27835]: I0318 13:43:08.569825 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:08.601576 master-0 kubenswrapper[27835]: I0318 13:43:08.601385 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.601686 master-0 kubenswrapper[27835]: I0318 13:43:08.601508 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-db-sync-jhwx7"] Mar 18 13:43:08.601686 master-0 kubenswrapper[27835]: I0318 13:43:08.601597 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " 
pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.601810 master-0 kubenswrapper[27835]: I0318 13:43:08.601781 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.602054 master-0 kubenswrapper[27835]: I0318 13:43:08.602001 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8s6k\" (UniqueName: \"kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.602189 master-0 kubenswrapper[27835]: I0318 13:43:08.602126 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.602245 master-0 kubenswrapper[27835]: I0318 13:43:08.602189 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.602430 master-0 kubenswrapper[27835]: I0318 13:43:08.602386 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.602498 master-0 kubenswrapper[27835]: I0318 13:43:08.602477 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.706449 master-0 kubenswrapper[27835]: I0318 13:43:08.704440 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8s6k\" (UniqueName: \"kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.706449 master-0 kubenswrapper[27835]: I0318 13:43:08.704523 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.706449 master-0 kubenswrapper[27835]: I0318 13:43:08.705386 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.706449 master-0 kubenswrapper[27835]: I0318 13:43:08.705604 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.707361 master-0 kubenswrapper[27835]: I0318 13:43:08.707313 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.707636 master-0 kubenswrapper[27835]: I0318 13:43:08.707600 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.707863 master-0 kubenswrapper[27835]: I0318 13:43:08.707836 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.707913 master-0 kubenswrapper[27835]: I0318 13:43:08.707879 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.707953 master-0 
kubenswrapper[27835]: I0318 13:43:08.707928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.708384 master-0 kubenswrapper[27835]: I0318 13:43:08.708354 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.716029 master-0 kubenswrapper[27835]: I0318 13:43:08.711968 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.716505 master-0 kubenswrapper[27835]: I0318 13:43:08.716468 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.717164 master-0 kubenswrapper[27835]: I0318 13:43:08.717125 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 
13:43:08.718019 master-0 kubenswrapper[27835]: I0318 13:43:08.717924 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.730299 master-0 kubenswrapper[27835]: I0318 13:43:08.730262 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 13:43:08.730559 master-0 kubenswrapper[27835]: I0318 13:43:08.730536 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d7d2380a2367ec81f9f9b44b1b86eaac9ba6ff0ab5cc582d80f5ba97c51d1f86/globalmount\"" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.734468 master-0 kubenswrapper[27835]: I0318 13:43:08.734399 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8s6k\" (UniqueName: \"kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:08.768715 master-0 kubenswrapper[27835]: I0318 13:43:08.768533 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8mqtx"] Mar 18 13:43:08.791480 master-0 kubenswrapper[27835]: W0318 13:43:08.791431 27835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa96d61e_36ca_4846_a008_82052eff4ab8.slice/crio-f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48 WatchSource:0}: Error finding container f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48: Status 404 returned error can't find the container with id f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48 Mar 18 13:43:09.049027 master-0 kubenswrapper[27835]: I0318 13:43:09.048887 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8mqtx" event={"ID":"fa96d61e-36ca-4846-a008-82052eff4ab8","Type":"ContainerStarted","Data":"f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48"} Mar 18 13:43:09.060543 master-0 kubenswrapper[27835]: I0318 13:43:09.056619 27835 generic.go:334] "Generic (PLEG): container finished" podID="a8203106-e334-473e-8e67-f6fddefc8016" containerID="1071cac3f5f8667e484fbef0e29c88aec37182c9e32d36ef8678f75dfa7b9dde" exitCode=0 Mar 18 13:43:09.060543 master-0 kubenswrapper[27835]: I0318 13:43:09.056709 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dcc5b569-tg282" event={"ID":"a8203106-e334-473e-8e67-f6fddefc8016","Type":"ContainerDied","Data":"1071cac3f5f8667e484fbef0e29c88aec37182c9e32d36ef8678f75dfa7b9dde"} Mar 18 13:43:09.063232 master-0 kubenswrapper[27835]: I0318 13:43:09.063184 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nq7m4" event={"ID":"562b6f7a-8086-457b-8f82-cc3c56043ac8","Type":"ContainerStarted","Data":"265a0fc59c15ef6c7115cfbea99575530cc3273b796645623364156cc8e7e6bf"} Mar 18 13:43:09.067631 master-0 kubenswrapper[27835]: I0318 13:43:09.067583 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vkvz7" event={"ID":"b686167f-35b3-4b2c-a6c7-074c63023350","Type":"ContainerStarted","Data":"d6f29167975a4815854e054ebf2ebe79100c677285491b8d0f69e65c2f7c4b4f"} Mar 18 
13:43:09.067842 master-0 kubenswrapper[27835]: I0318 13:43:09.067820 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vkvz7" event={"ID":"b686167f-35b3-4b2c-a6c7-074c63023350","Type":"ContainerStarted","Data":"1cb7720a5b3424498ccc1e982b4e23081c4a5c182c6551af79298d2fec22d954"} Mar 18 13:43:09.078564 master-0 kubenswrapper[27835]: I0318 13:43:09.078502 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-db-sync-jhwx7" event={"ID":"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321","Type":"ContainerStarted","Data":"694e546aa9f15054614f728b930ed73eb1079c6944eb0dcf47cfb389703c8e41"} Mar 18 13:43:09.131047 master-0 kubenswrapper[27835]: I0318 13:43:09.130981 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-mtjpd"] Mar 18 13:43:09.206615 master-0 kubenswrapper[27835]: I0318 13:43:09.202053 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f5fe-account-create-update-4dwnc"] Mar 18 13:43:09.209771 master-0 kubenswrapper[27835]: I0318 13:43:09.209718 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:43:09.727470 master-0 kubenswrapper[27835]: I0318 13:43:09.724616 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dcc5b569-tg282" Mar 18 13:43:09.741477 master-0 kubenswrapper[27835]: I0318 13:43:09.737565 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-create-vkvz7" podStartSLOduration=3.737546774 podStartE2EDuration="3.737546774s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:09.732528741 +0000 UTC m=+1153.697740311" watchObservedRunningTime="2026-03-18 13:43:09.737546774 +0000 UTC m=+1153.702758334" Mar 18 13:43:09.764816 master-0 kubenswrapper[27835]: I0318 13:43:09.761643 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-internal-api-0"] Mar 18 13:43:09.764816 master-0 kubenswrapper[27835]: E0318 13:43:09.762130 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8203106-e334-473e-8e67-f6fddefc8016" containerName="init" Mar 18 13:43:09.764816 master-0 kubenswrapper[27835]: I0318 13:43:09.762145 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8203106-e334-473e-8e67-f6fddefc8016" containerName="init" Mar 18 13:43:09.764816 master-0 kubenswrapper[27835]: I0318 13:43:09.762368 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8203106-e334-473e-8e67-f6fddefc8016" containerName="init" Mar 18 13:43:09.764816 master-0 kubenswrapper[27835]: I0318 13:43:09.763453 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:09.766515 master-0 kubenswrapper[27835]: I0318 13:43:09.766476 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-internal-config-data" Mar 18 13:43:09.766666 master-0 kubenswrapper[27835]: I0318 13:43:09.766630 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 13:43:10.048434 master-0 kubenswrapper[27835]: I0318 13:43:10.048350 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"] Mar 18 13:43:10.098377 master-0 kubenswrapper[27835]: I0318 13:43:10.098299 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.098623 master-0 kubenswrapper[27835]: I0318 13:43:10.098409 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.098623 master-0 kubenswrapper[27835]: I0318 13:43:10.098450 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.098623 master-0 kubenswrapper[27835]: I0318 13:43:10.098487 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.098754 master-0 kubenswrapper[27835]: I0318 13:43:10.098670 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bsgn\" (UniqueName: \"kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.098754 master-0 kubenswrapper[27835]: I0318 13:43:10.098705 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb\") pod \"a8203106-e334-473e-8e67-f6fddefc8016\" (UID: \"a8203106-e334-473e-8e67-f6fddefc8016\") " Mar 18 13:43:10.099126 master-0 kubenswrapper[27835]: I0318 13:43:10.099082 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:10.099192 master-0 kubenswrapper[27835]: I0318 13:43:10.099179 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:10.099259 master-0 kubenswrapper[27835]: I0318 13:43:10.099244 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.099384 master-0 kubenswrapper[27835]: I0318 13:43:10.099346 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt4m2\" (UniqueName: \"kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.099487 master-0 kubenswrapper[27835]: I0318 13:43:10.099388 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.099623 master-0 kubenswrapper[27835]: I0318 13:43:10.099585 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.099623 master-0 kubenswrapper[27835]: I0318 13:43:10.099619 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.099743 master-0 kubenswrapper[27835]: I0318 13:43:10.099687 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.102696 master-0 kubenswrapper[27835]: I0318 13:43:10.102644 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtjpd" event={"ID":"66ceeb4b-18bd-4d26-a1e7-ef700771aeec","Type":"ContainerStarted","Data":"9fd85f008728cfa85974596cac191e10f4d840ded1a79fbb3cd49c6801164197"}
Mar 18 13:43:10.102787 master-0 kubenswrapper[27835]: I0318 13:43:10.102701 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtjpd" event={"ID":"66ceeb4b-18bd-4d26-a1e7-ef700771aeec","Type":"ContainerStarted","Data":"0087979584d94b514f954ab39d3f878c8e3d3c172ab0b30f7b1e26087ba298a1"}
Mar 18 13:43:10.105736 master-0 kubenswrapper[27835]: I0318 13:43:10.105676 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dcc5b569-tg282" event={"ID":"a8203106-e334-473e-8e67-f6fddefc8016","Type":"ContainerDied","Data":"a73e25eb244337d3ea8352f1d62450a0806bf91f66999a3fbcc96838ae356902"}
Mar 18 13:43:10.105736 master-0 kubenswrapper[27835]: I0318 13:43:10.105733 27835 scope.go:117] "RemoveContainer" containerID="1071cac3f5f8667e484fbef0e29c88aec37182c9e32d36ef8678f75dfa7b9dde"
Mar 18 13:43:10.106082 master-0 kubenswrapper[27835]: I0318 13:43:10.105851 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dcc5b569-tg282"
Mar 18 13:43:10.107047 master-0 kubenswrapper[27835]: I0318 13:43:10.106966 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn" (OuterVolumeSpecName: "kube-api-access-4bsgn") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "kube-api-access-4bsgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:43:10.109333 master-0 kubenswrapper[27835]: I0318 13:43:10.109247 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerStarted","Data":"518954abcd4ba774aae84929c5c606de15aafa5ec705809dc6fc2a486dcfd130"}
Mar 18 13:43:10.109333 master-0 kubenswrapper[27835]: I0318 13:43:10.109299 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerStarted","Data":"8b95b1872b02020d195b68ed5a773100977db05c120e249340f92a3b65c6c4af"}
Mar 18 13:43:10.112066 master-0 kubenswrapper[27835]: I0318 13:43:10.112019 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f5fe-account-create-update-4dwnc" event={"ID":"6b779cf8-bfc6-417f-b28b-4cfa060a6db2","Type":"ContainerStarted","Data":"a490481a02a6cf3bfe7dab4c92ba1dd45c88fd0e8d402c4f8cb7529465c6e5ce"}
Mar 18 13:43:10.112066 master-0 kubenswrapper[27835]: I0318 13:43:10.112053 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f5fe-account-create-update-4dwnc" event={"ID":"6b779cf8-bfc6-417f-b28b-4cfa060a6db2","Type":"ContainerStarted","Data":"1763928bef7232de636495b49d5bc7d33bf41abcd6a84eda063c90e5ba9dd21d"}
Mar 18 13:43:10.114782 master-0 kubenswrapper[27835]: I0318 13:43:10.114737 27835 generic.go:334] "Generic (PLEG): container finished" podID="b686167f-35b3-4b2c-a6c7-074c63023350" containerID="d6f29167975a4815854e054ebf2ebe79100c677285491b8d0f69e65c2f7c4b4f" exitCode=0
Mar 18 13:43:10.115466 master-0 kubenswrapper[27835]: I0318 13:43:10.115374 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vkvz7" event={"ID":"b686167f-35b3-4b2c-a6c7-074c63023350","Type":"ContainerDied","Data":"d6f29167975a4815854e054ebf2ebe79100c677285491b8d0f69e65c2f7c4b4f"}
Mar 18 13:43:10.142056 master-0 kubenswrapper[27835]: I0318 13:43:10.141969 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:10.142407 master-0 kubenswrapper[27835]: I0318 13:43:10.142358 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:10.147944 master-0 kubenswrapper[27835]: I0318 13:43:10.147882 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config" (OuterVolumeSpecName: "config") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:10.157060 master-0 kubenswrapper[27835]: I0318 13:43:10.156990 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:10.180970 master-0 kubenswrapper[27835]: I0318 13:43:10.180829 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8203106-e334-473e-8e67-f6fddefc8016" (UID: "a8203106-e334-473e-8e67-f6fddefc8016"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:10.202192 master-0 kubenswrapper[27835]: I0318 13:43:10.202112 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt4m2\" (UniqueName: \"kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202192 master-0 kubenswrapper[27835]: I0318 13:43:10.202196 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202551 master-0 kubenswrapper[27835]: I0318 13:43:10.202293 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202551 master-0 kubenswrapper[27835]: I0318 13:43:10.202310 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202551 master-0 kubenswrapper[27835]: I0318 13:43:10.202457 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202551 master-0 kubenswrapper[27835]: I0318 13:43:10.202477 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202745 master-0 kubenswrapper[27835]: I0318 13:43:10.202573 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.202745 master-0 kubenswrapper[27835]: I0318 13:43:10.202741 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bsgn\" (UniqueName: \"kubernetes.io/projected/a8203106-e334-473e-8e67-f6fddefc8016-kube-api-access-4bsgn\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.202836 master-0 kubenswrapper[27835]: I0318 13:43:10.202780 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.202836 master-0 kubenswrapper[27835]: I0318 13:43:10.202791 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.202836 master-0 kubenswrapper[27835]: I0318 13:43:10.202800 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.202836 master-0 kubenswrapper[27835]: I0318 13:43:10.202808 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.203024 master-0 kubenswrapper[27835]: I0318 13:43:10.202937 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8203106-e334-473e-8e67-f6fddefc8016-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:10.205339 master-0 kubenswrapper[27835]: I0318 13:43:10.205286 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.207153 master-0 kubenswrapper[27835]: I0318 13:43:10.206234 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.213741 master-0 kubenswrapper[27835]: I0318 13:43:10.213399 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.217899 master-0 kubenswrapper[27835]: I0318 13:43:10.217850 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.218795 master-0 kubenswrapper[27835]: I0318 13:43:10.218773 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.219963 master-0 kubenswrapper[27835]: I0318 13:43:10.219908 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.351294 master-0 kubenswrapper[27835]: I0318 13:43:10.349706 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nq7m4" podStartSLOduration=4.349685674 podStartE2EDuration="4.349685674s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:10.342574316 +0000 UTC m=+1154.307785866" watchObservedRunningTime="2026-03-18 13:43:10.349685674 +0000 UTC m=+1154.314897234"
Mar 18 13:43:10.366498 master-0 kubenswrapper[27835]: I0318 13:43:10.364510 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt4m2\" (UniqueName: \"kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.410292 master-0 kubenswrapper[27835]: I0318 13:43:10.408832 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.412311 master-0 kubenswrapper[27835]: I0318 13:43:10.411963 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 13:43:10.412311 master-0 kubenswrapper[27835]: I0318 13:43:10.412006 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fe2c78a82f6fd4790d1308034ee8953c7233fac051a28905339bd938cd5ef252/globalmount\"" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:10.533342 master-0 kubenswrapper[27835]: I0318 13:43:10.533244 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:43:10.689048 master-0 kubenswrapper[27835]: I0318 13:43:10.688944 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:43:11.108046 master-0 kubenswrapper[27835]: I0318 13:43:11.107926 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-mtjpd" podStartSLOduration=5.107897887 podStartE2EDuration="5.107897887s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:11.100266994 +0000 UTC m=+1155.065478554" watchObservedRunningTime="2026-03-18 13:43:11.107897887 +0000 UTC m=+1155.073109457"
Mar 18 13:43:11.151996 master-0 kubenswrapper[27835]: I0318 13:43:11.151818 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-external-api-0"]
Mar 18 13:43:11.157043 master-0 kubenswrapper[27835]: I0318 13:43:11.156983 27835 generic.go:334] "Generic (PLEG): container finished" podID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerID="518954abcd4ba774aae84929c5c606de15aafa5ec705809dc6fc2a486dcfd130" exitCode=0
Mar 18 13:43:11.157222 master-0 kubenswrapper[27835]: I0318 13:43:11.157083 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerDied","Data":"518954abcd4ba774aae84929c5c606de15aafa5ec705809dc6fc2a486dcfd130"}
Mar 18 13:43:11.580861 master-0 kubenswrapper[27835]: I0318 13:43:11.574160 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-f5fe-account-create-update-4dwnc" podStartSLOduration=5.574135839 podStartE2EDuration="5.574135839s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:11.570755318 +0000 UTC m=+1155.535966878" watchObservedRunningTime="2026-03-18 13:43:11.574135839 +0000 UTC m=+1155.539347399"
Mar 18 13:43:11.644985 master-0 kubenswrapper[27835]: I0318 13:43:11.644920 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:43:11.646175 master-0 kubenswrapper[27835]: E0318 13:43:11.646130 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-4f519-default-internal-api-0" podUID="097312db-40df-43d6-b3ba-998c52c4e55f"
Mar 18 13:43:11.717729 master-0 kubenswrapper[27835]: I0318 13:43:11.717467 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:11.760010 master-0 kubenswrapper[27835]: I0318 13:43:11.749587 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts\") pod \"b686167f-35b3-4b2c-a6c7-074c63023350\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") "
Mar 18 13:43:11.760010 master-0 kubenswrapper[27835]: I0318 13:43:11.749930 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjwm2\" (UniqueName: \"kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2\") pod \"b686167f-35b3-4b2c-a6c7-074c63023350\" (UID: \"b686167f-35b3-4b2c-a6c7-074c63023350\") "
Mar 18 13:43:11.760010 master-0 kubenswrapper[27835]: I0318 13:43:11.751028 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b686167f-35b3-4b2c-a6c7-074c63023350" (UID: "b686167f-35b3-4b2c-a6c7-074c63023350"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:11.772018 master-0 kubenswrapper[27835]: I0318 13:43:11.771956 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2" (OuterVolumeSpecName: "kube-api-access-cjwm2") pod "b686167f-35b3-4b2c-a6c7-074c63023350" (UID: "b686167f-35b3-4b2c-a6c7-074c63023350"). InnerVolumeSpecName "kube-api-access-cjwm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:43:11.853621 master-0 kubenswrapper[27835]: I0318 13:43:11.853539 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b686167f-35b3-4b2c-a6c7-074c63023350-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:11.853621 master-0 kubenswrapper[27835]: I0318 13:43:11.853598 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjwm2\" (UniqueName: \"kubernetes.io/projected/b686167f-35b3-4b2c-a6c7-074c63023350-kube-api-access-cjwm2\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.159993 master-0 kubenswrapper[27835]: I0318 13:43:12.159926 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:12.177456 master-0 kubenswrapper[27835]: I0318 13:43:12.177358 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerStarted","Data":"61fcf7fd30d8e17a9af269aeed2b7b7d6ae9eb1b61e05c0e88026a4a3f5da3fc"}
Mar 18 13:43:12.177838 master-0 kubenswrapper[27835]: I0318 13:43:12.177478 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"
Mar 18 13:43:12.184667 master-0 kubenswrapper[27835]: I0318 13:43:12.184616 27835 generic.go:334] "Generic (PLEG): container finished" podID="6b779cf8-bfc6-417f-b28b-4cfa060a6db2" containerID="a490481a02a6cf3bfe7dab4c92ba1dd45c88fd0e8d402c4f8cb7529465c6e5ce" exitCode=0
Mar 18 13:43:12.184879 master-0 kubenswrapper[27835]: I0318 13:43:12.184698 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f5fe-account-create-update-4dwnc" event={"ID":"6b779cf8-bfc6-417f-b28b-4cfa060a6db2","Type":"ContainerDied","Data":"a490481a02a6cf3bfe7dab4c92ba1dd45c88fd0e8d402c4f8cb7529465c6e5ce"}
Mar 18 13:43:12.187584 master-0 kubenswrapper[27835]: I0318 13:43:12.187489 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vkvz7"
Mar 18 13:43:12.187584 master-0 kubenswrapper[27835]: I0318 13:43:12.187515 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:12.187584 master-0 kubenswrapper[27835]: I0318 13:43:12.187563 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vkvz7" event={"ID":"b686167f-35b3-4b2c-a6c7-074c63023350","Type":"ContainerDied","Data":"1cb7720a5b3424498ccc1e982b4e23081c4a5c182c6551af79298d2fec22d954"}
Mar 18 13:43:12.187584 master-0 kubenswrapper[27835]: I0318 13:43:12.187586 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cb7720a5b3424498ccc1e982b4e23081c4a5c182c6551af79298d2fec22d954"
Mar 18 13:43:12.203017 master-0 kubenswrapper[27835]: I0318 13:43:12.202979 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275157 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275248 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275322 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275356 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275402 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275507 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt4m2\" (UniqueName: \"kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.277502 master-0 kubenswrapper[27835]: I0318 13:43:12.275643 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.279991 master-0 kubenswrapper[27835]: I0318 13:43:12.279913 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:43:12.283047 master-0 kubenswrapper[27835]: I0318 13:43:12.282132 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data" (OuterVolumeSpecName: "config-data") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:12.284447 master-0 kubenswrapper[27835]: I0318 13:43:12.283775 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2" (OuterVolumeSpecName: "kube-api-access-kt4m2") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "kube-api-access-kt4m2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:43:12.320841 master-0 kubenswrapper[27835]: I0318 13:43:12.320750 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs" (OuterVolumeSpecName: "logs") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:43:12.328449 master-0 kubenswrapper[27835]: I0318 13:43:12.327700 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:12.330445 master-0 kubenswrapper[27835]: I0318 13:43:12.330023 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts" (OuterVolumeSpecName: "scripts") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:12.332204 master-0 kubenswrapper[27835]: I0318 13:43:12.331737 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378649 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378734 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378753 27835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/097312db-40df-43d6-b3ba-998c52c4e55f-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378769 27835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378783 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt4m2\" (UniqueName: \"kubernetes.io/projected/097312db-40df-43d6-b3ba-998c52c4e55f-kube-api-access-kt4m2\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378794 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.379139 master-0 kubenswrapper[27835]: I0318 13:43:12.378805 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/097312db-40df-43d6-b3ba-998c52c4e55f-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.482000 master-0 kubenswrapper[27835]: I0318 13:43:12.481943 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"097312db-40df-43d6-b3ba-998c52c4e55f\" (UID: \"097312db-40df-43d6-b3ba-998c52c4e55f\") "
Mar 18 13:43:12.488048 master-0 kubenswrapper[27835]: I0318 13:43:12.486041 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-external-api-0"]
Mar 18 13:43:12.515654 master-0 kubenswrapper[27835]: I0318 13:43:12.513022 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76" (OuterVolumeSpecName: "glance") pod "097312db-40df-43d6-b3ba-998c52c4e55f" (UID: "097312db-40df-43d6-b3ba-998c52c4e55f"). InnerVolumeSpecName "pvc-3ead41c4-903a-4686-a384-328e4b9fb938". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 18 13:43:12.535312 master-0 kubenswrapper[27835]: I0318 13:43:12.535254 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"]
Mar 18 13:43:12.587853 master-0 kubenswrapper[27835]: I0318 13:43:12.587786 27835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") on node \"master-0\" "
Mar 18 13:43:12.623054 master-0 kubenswrapper[27835]: I0318 13:43:12.621884 27835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 18 13:43:12.623054 master-0 kubenswrapper[27835]: I0318 13:43:12.622374 27835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3ead41c4-903a-4686-a384-328e4b9fb938" (UniqueName: "kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76") on node "master-0"
Mar 18 13:43:12.650756 master-0 kubenswrapper[27835]: I0318 13:43:12.650692 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dcc5b569-tg282"]
Mar 18 13:43:12.691579 master-0 kubenswrapper[27835]: I0318 13:43:12.691519 27835 reconciler_common.go:293] "Volume detached for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:12.792496 master-0 kubenswrapper[27835]: I0318 13:43:12.791787 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" podStartSLOduration=6.791764581 podStartE2EDuration="6.791764581s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:12.776274091 +0000 UTC m=+1156.741485681" watchObservedRunningTime="2026-03-18 13:43:12.791764581 +0000 UTC m=+1156.756976141"
Mar 18 13:43:13.214996 master-0 kubenswrapper[27835]: I0318 13:43:13.214923 27835 generic.go:334] "Generic (PLEG): container finished" podID="562b6f7a-8086-457b-8f82-cc3c56043ac8" containerID="265a0fc59c15ef6c7115cfbea99575530cc3273b796645623364156cc8e7e6bf" exitCode=0
Mar 18 13:43:13.215612 master-0 kubenswrapper[27835]: I0318 13:43:13.215122 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nq7m4" event={"ID":"562b6f7a-8086-457b-8f82-cc3c56043ac8","Type":"ContainerDied","Data":"265a0fc59c15ef6c7115cfbea99575530cc3273b796645623364156cc8e7e6bf"}
Mar 18 13:43:13.215717 master-0 kubenswrapper[27835]: I0318 13:43:13.215686 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:13.344679 master-0 kubenswrapper[27835]: I0318 13:43:13.331219 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:43:13.349269 master-0 kubenswrapper[27835]: I0318 13:43:13.349035 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:43:13.459021 master-0 kubenswrapper[27835]: I0318 13:43:13.458952 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:43:13.459605 master-0 kubenswrapper[27835]: E0318 13:43:13.459584 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b686167f-35b3-4b2c-a6c7-074c63023350" containerName="mariadb-database-create"
Mar 18 13:43:13.459741 master-0 kubenswrapper[27835]: I0318 13:43:13.459606 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b686167f-35b3-4b2c-a6c7-074c63023350" containerName="mariadb-database-create"
Mar 18 13:43:13.459940 master-0 kubenswrapper[27835]: I0318 13:43:13.459915 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b686167f-35b3-4b2c-a6c7-074c63023350" containerName="mariadb-database-create"
Mar 18 13:43:13.463653 master-0 kubenswrapper[27835]: I0318 13:43:13.461387 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:13.463929 master-0 kubenswrapper[27835]: I0318 13:43:13.463878 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 18 13:43:13.465561 master-0 kubenswrapper[27835]: I0318 13:43:13.464307 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-internal-config-data"
Mar 18 13:43:13.474876 master-0 kubenswrapper[27835]: I0318 13:43:13.474803 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:43:13.520583 master-0 kubenswrapper[27835]: I0318 13:43:13.520529 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:13.521326 master-0 kubenswrapper[27835]: I0318 13:43:13.521300 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8c4p\" (UniqueName: \"kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:13.521636 master-0 kubenswrapper[27835]: I0318 13:43:13.521616 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:43:13.521907 master-0
kubenswrapper[27835]: I0318 13:43:13.521872 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.522304 master-0 kubenswrapper[27835]: I0318 13:43:13.522279 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.522501 master-0 kubenswrapper[27835]: I0318 13:43:13.522479 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.522664 master-0 kubenswrapper[27835]: I0318 13:43:13.522646 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.522791 master-0 kubenswrapper[27835]: I0318 13:43:13.522770 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run\") pod 
\"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627042 master-0 kubenswrapper[27835]: I0318 13:43:13.626978 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627042 master-0 kubenswrapper[27835]: I0318 13:43:13.627030 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627126 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627172 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8c4p\" (UniqueName: \"kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627215 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627235 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627255 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.627325 master-0 kubenswrapper[27835]: I0318 13:43:13.627289 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.629231 master-0 kubenswrapper[27835]: I0318 13:43:13.629190 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.629923 master-0 kubenswrapper[27835]: I0318 13:43:13.629765 27835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.631323 master-0 kubenswrapper[27835]: I0318 13:43:13.631283 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 13:43:13.631400 master-0 kubenswrapper[27835]: I0318 13:43:13.631322 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fe2c78a82f6fd4790d1308034ee8953c7233fac051a28905339bd938cd5ef252/globalmount\"" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.631838 master-0 kubenswrapper[27835]: I0318 13:43:13.631799 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.632940 master-0 kubenswrapper[27835]: I0318 13:43:13.632900 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.638704 master-0 kubenswrapper[27835]: I0318 13:43:13.638655 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.653214 master-0 kubenswrapper[27835]: I0318 13:43:13.653170 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:13.674485 master-0 kubenswrapper[27835]: I0318 13:43:13.674407 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8c4p\" (UniqueName: \"kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:14.302805 master-0 kubenswrapper[27835]: I0318 13:43:14.302715 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097312db-40df-43d6-b3ba-998c52c4e55f" path="/var/lib/kubelet/pods/097312db-40df-43d6-b3ba-998c52c4e55f/volumes" Mar 18 13:43:14.303981 master-0 kubenswrapper[27835]: I0318 13:43:14.303961 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8203106-e334-473e-8e67-f6fddefc8016" path="/var/lib/kubelet/pods/a8203106-e334-473e-8e67-f6fddefc8016/volumes" Mar 18 13:43:15.026976 master-0 kubenswrapper[27835]: I0318 13:43:15.026926 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " 
pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:15.304846 master-0 kubenswrapper[27835]: I0318 13:43:15.304765 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:15.532819 master-0 kubenswrapper[27835]: I0318 13:43:15.531542 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:15.537110 master-0 kubenswrapper[27835]: I0318 13:43:15.537014 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nq7m4" Mar 18 13:43:15.639263 master-0 kubenswrapper[27835]: I0318 13:43:15.638690 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clrwj\" (UniqueName: \"kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.639573 master-0 kubenswrapper[27835]: I0318 13:43:15.639338 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts\") pod \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " Mar 18 13:43:15.639573 master-0 kubenswrapper[27835]: I0318 13:43:15.639395 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.640464 master-0 kubenswrapper[27835]: I0318 13:43:15.640367 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b779cf8-bfc6-417f-b28b-4cfa060a6db2" (UID: "6b779cf8-bfc6-417f-b28b-4cfa060a6db2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:15.641653 master-0 kubenswrapper[27835]: I0318 13:43:15.640876 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.641653 master-0 kubenswrapper[27835]: I0318 13:43:15.640927 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh8xr\" (UniqueName: \"kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr\") pod \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\" (UID: \"6b779cf8-bfc6-417f-b28b-4cfa060a6db2\") " Mar 18 13:43:15.641653 master-0 kubenswrapper[27835]: I0318 13:43:15.641053 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.641653 master-0 kubenswrapper[27835]: I0318 13:43:15.641085 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.641653 master-0 kubenswrapper[27835]: I0318 13:43:15.641169 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys\") pod \"562b6f7a-8086-457b-8f82-cc3c56043ac8\" (UID: \"562b6f7a-8086-457b-8f82-cc3c56043ac8\") " Mar 18 13:43:15.642840 master-0 kubenswrapper[27835]: I0318 13:43:15.642768 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj" (OuterVolumeSpecName: "kube-api-access-clrwj") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "kube-api-access-clrwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:15.645585 master-0 kubenswrapper[27835]: I0318 13:43:15.645539 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clrwj\" (UniqueName: \"kubernetes.io/projected/562b6f7a-8086-457b-8f82-cc3c56043ac8-kube-api-access-clrwj\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.645585 master-0 kubenswrapper[27835]: I0318 13:43:15.645583 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.647936 master-0 kubenswrapper[27835]: I0318 13:43:15.647863 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:15.653636 master-0 kubenswrapper[27835]: I0318 13:43:15.653552 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:15.659321 master-0 kubenswrapper[27835]: I0318 13:43:15.659239 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts" (OuterVolumeSpecName: "scripts") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:15.659611 master-0 kubenswrapper[27835]: I0318 13:43:15.659324 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr" (OuterVolumeSpecName: "kube-api-access-mh8xr") pod "6b779cf8-bfc6-417f-b28b-4cfa060a6db2" (UID: "6b779cf8-bfc6-417f-b28b-4cfa060a6db2"). InnerVolumeSpecName "kube-api-access-mh8xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:15.687749 master-0 kubenswrapper[27835]: I0318 13:43:15.687663 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:15.697851 master-0 kubenswrapper[27835]: I0318 13:43:15.697769 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data" (OuterVolumeSpecName: "config-data") pod "562b6f7a-8086-457b-8f82-cc3c56043ac8" (UID: "562b6f7a-8086-457b-8f82-cc3c56043ac8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:15.748290 master-0 kubenswrapper[27835]: I0318 13:43:15.748202 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.748290 master-0 kubenswrapper[27835]: I0318 13:43:15.748288 27835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.748578 master-0 kubenswrapper[27835]: I0318 13:43:15.748310 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh8xr\" (UniqueName: \"kubernetes.io/projected/6b779cf8-bfc6-417f-b28b-4cfa060a6db2-kube-api-access-mh8xr\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.748578 master-0 kubenswrapper[27835]: I0318 13:43:15.748325 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.748578 master-0 kubenswrapper[27835]: I0318 13:43:15.748335 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:15.748578 master-0 kubenswrapper[27835]: I0318 13:43:15.748343 
27835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/562b6f7a-8086-457b-8f82-cc3c56043ac8-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:16.020482 master-0 kubenswrapper[27835]: I0318 13:43:16.020054 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"] Mar 18 13:43:16.052755 master-0 kubenswrapper[27835]: W0318 13:43:16.052653 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf459ab_dc8e_4a13_bbee_b68d6d031781.slice/crio-bf4e0bf001bf13a8d56c6ea89735e199aab41fed8898d008e6835d4b479eb57c WatchSource:0}: Error finding container bf4e0bf001bf13a8d56c6ea89735e199aab41fed8898d008e6835d4b479eb57c: Status 404 returned error can't find the container with id bf4e0bf001bf13a8d56c6ea89735e199aab41fed8898d008e6835d4b479eb57c Mar 18 13:43:16.310452 master-0 kubenswrapper[27835]: I0318 13:43:16.307531 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8mqtx" event={"ID":"fa96d61e-36ca-4846-a008-82052eff4ab8","Type":"ContainerStarted","Data":"34d2a0adebc7131cf2dc3dfb6494ccbf6824a761e785f1ac83c71d0751212918"} Mar 18 13:43:16.310452 master-0 kubenswrapper[27835]: I0318 13:43:16.307615 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerStarted","Data":"bf4e0bf001bf13a8d56c6ea89735e199aab41fed8898d008e6835d4b479eb57c"} Mar 18 13:43:16.323570 master-0 kubenswrapper[27835]: I0318 13:43:16.323522 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nq7m4" event={"ID":"562b6f7a-8086-457b-8f82-cc3c56043ac8","Type":"ContainerDied","Data":"e93cbc1446ae4cfdbcc2a72a2613fcfb8b40273c9fbc819a6ceb4b99f26a1747"} Mar 18 13:43:16.323570 master-0 kubenswrapper[27835]: I0318 13:43:16.323572 
27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nq7m4" Mar 18 13:43:16.323799 master-0 kubenswrapper[27835]: I0318 13:43:16.323582 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e93cbc1446ae4cfdbcc2a72a2613fcfb8b40273c9fbc819a6ceb4b99f26a1747" Mar 18 13:43:16.326065 master-0 kubenswrapper[27835]: I0318 13:43:16.326032 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f5fe-account-create-update-4dwnc" event={"ID":"6b779cf8-bfc6-417f-b28b-4cfa060a6db2","Type":"ContainerDied","Data":"1763928bef7232de636495b49d5bc7d33bf41abcd6a84eda063c90e5ba9dd21d"} Mar 18 13:43:16.326178 master-0 kubenswrapper[27835]: I0318 13:43:16.326163 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1763928bef7232de636495b49d5bc7d33bf41abcd6a84eda063c90e5ba9dd21d" Mar 18 13:43:16.326990 master-0 kubenswrapper[27835]: I0318 13:43:16.326724 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-f5fe-account-create-update-4dwnc" Mar 18 13:43:16.327615 master-0 kubenswrapper[27835]: I0318 13:43:16.327577 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerStarted","Data":"eb825cedcb8ba87eb72cf0caec5d54cc3780ecc664afb5ae378daccb40203203"} Mar 18 13:43:16.464312 master-0 kubenswrapper[27835]: I0318 13:43:16.460361 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8mqtx" podStartSLOduration=3.681641402 podStartE2EDuration="10.460334357s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="2026-03-18 13:43:08.794127631 +0000 UTC m=+1152.759339191" lastFinishedPulling="2026-03-18 13:43:15.572820576 +0000 UTC m=+1159.538032146" observedRunningTime="2026-03-18 13:43:16.443330346 +0000 UTC m=+1160.408541906" watchObservedRunningTime="2026-03-18 13:43:16.460334357 +0000 UTC m=+1160.425545927" Mar 18 13:43:16.750596 master-0 kubenswrapper[27835]: I0318 13:43:16.750479 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nq7m4"] Mar 18 13:43:16.791594 master-0 kubenswrapper[27835]: I0318 13:43:16.791450 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nq7m4"] Mar 18 13:43:16.848789 master-0 kubenswrapper[27835]: I0318 13:43:16.848720 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xcnw2"] Mar 18 13:43:16.849343 master-0 kubenswrapper[27835]: E0318 13:43:16.849268 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562b6f7a-8086-457b-8f82-cc3c56043ac8" containerName="keystone-bootstrap" Mar 18 13:43:16.849343 master-0 kubenswrapper[27835]: I0318 13:43:16.849290 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="562b6f7a-8086-457b-8f82-cc3c56043ac8" 
containerName="keystone-bootstrap" Mar 18 13:43:16.849478 master-0 kubenswrapper[27835]: E0318 13:43:16.849360 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b779cf8-bfc6-417f-b28b-4cfa060a6db2" containerName="mariadb-account-create-update" Mar 18 13:43:16.849478 master-0 kubenswrapper[27835]: I0318 13:43:16.849371 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b779cf8-bfc6-417f-b28b-4cfa060a6db2" containerName="mariadb-account-create-update" Mar 18 13:43:16.849660 master-0 kubenswrapper[27835]: I0318 13:43:16.849648 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="562b6f7a-8086-457b-8f82-cc3c56043ac8" containerName="keystone-bootstrap" Mar 18 13:43:16.849714 master-0 kubenswrapper[27835]: I0318 13:43:16.849693 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b779cf8-bfc6-417f-b28b-4cfa060a6db2" containerName="mariadb-account-create-update" Mar 18 13:43:16.852090 master-0 kubenswrapper[27835]: I0318 13:43:16.850725 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.860683 master-0 kubenswrapper[27835]: I0318 13:43:16.860624 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 13:43:16.860909 master-0 kubenswrapper[27835]: I0318 13:43:16.860846 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 13:43:16.868240 master-0 kubenswrapper[27835]: I0318 13:43:16.863111 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 13:43:16.868240 master-0 kubenswrapper[27835]: I0318 13:43:16.863564 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 13:43:16.872824 master-0 kubenswrapper[27835]: I0318 13:43:16.872765 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xcnw2"] Mar 18 13:43:16.988186 master-0 kubenswrapper[27835]: I0318 13:43:16.988138 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.988186 master-0 kubenswrapper[27835]: I0318 13:43:16.988186 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98k2g\" (UniqueName: \"kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.988348 master-0 kubenswrapper[27835]: I0318 13:43:16.988212 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.988393 master-0 kubenswrapper[27835]: I0318 13:43:16.988338 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.988836 master-0 kubenswrapper[27835]: I0318 13:43:16.988784 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:16.988972 master-0 kubenswrapper[27835]: I0318 13:43:16.988921 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091485 master-0 kubenswrapper[27835]: I0318 13:43:17.091332 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091695 master-0 kubenswrapper[27835]: I0318 13:43:17.091600 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091836 master-0 kubenswrapper[27835]: I0318 13:43:17.091817 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091888 master-0 kubenswrapper[27835]: I0318 13:43:17.091847 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98k2g\" (UniqueName: \"kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091888 master-0 kubenswrapper[27835]: I0318 13:43:17.091880 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.091963 master-0 kubenswrapper[27835]: I0318 13:43:17.091944 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.095898 master-0 kubenswrapper[27835]: I0318 13:43:17.095859 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.096646 master-0 kubenswrapper[27835]: I0318 13:43:17.096610 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.097052 master-0 kubenswrapper[27835]: I0318 13:43:17.097025 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.104120 master-0 kubenswrapper[27835]: I0318 13:43:17.104080 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.110109 master-0 kubenswrapper[27835]: I0318 13:43:17.110070 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys\") pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.110544 master-0 kubenswrapper[27835]: I0318 13:43:17.110522 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98k2g\" (UniqueName: \"kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g\") 
pod \"keystone-bootstrap-xcnw2\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.185954 master-0 kubenswrapper[27835]: I0318 13:43:17.185900 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:17.352295 master-0 kubenswrapper[27835]: I0318 13:43:17.352248 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerStarted","Data":"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e"} Mar 18 13:43:17.354577 master-0 kubenswrapper[27835]: I0318 13:43:17.354503 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerStarted","Data":"ca78313e63b64504e4ce64c347c881395f4cdb56b4dbf7fb7c3f21484da0ea8b"} Mar 18 13:43:17.529000 master-0 kubenswrapper[27835]: I0318 13:43:17.528919 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:43:17.534475 master-0 kubenswrapper[27835]: I0318 13:43:17.533906 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-tmkx2"] Mar 18 13:43:17.537520 master-0 kubenswrapper[27835]: I0318 13:43:17.537475 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.539931 master-0 kubenswrapper[27835]: I0318 13:43:17.539808 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Mar 18 13:43:17.547810 master-0 kubenswrapper[27835]: I0318 13:43:17.546012 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-tmkx2"] Mar 18 13:43:17.547810 master-0 kubenswrapper[27835]: I0318 13:43:17.547351 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722650 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c56tx\" (UniqueName: \"kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722705 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722732 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722749 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722783 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.729518 master-0 kubenswrapper[27835]: I0318 13:43:17.722811 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.824847 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825114 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c56tx\" (UniqueName: \"kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825159 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825182 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825207 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825250 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.825828 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.829409 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts\") pod 
\"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.829662 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.829738 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"] Mar 18 13:43:17.830550 master-0 kubenswrapper[27835]: I0318 13:43:17.829954 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="dnsmasq-dns" containerID="cri-o://19cd67eec991225b9b13b9e8eaf173efffee41d378abfb1990727e3839b98a61" gracePeriod=10 Mar 18 13:43:17.867737 master-0 kubenswrapper[27835]: I0318 13:43:17.867698 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.868721 master-0 kubenswrapper[27835]: I0318 13:43:17.868680 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.873536 master-0 kubenswrapper[27835]: I0318 13:43:17.873472 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xcnw2"] Mar 18 13:43:17.880656 master-0 
kubenswrapper[27835]: I0318 13:43:17.876465 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c56tx\" (UniqueName: \"kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx\") pod \"ironic-db-sync-tmkx2\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:17.944105 master-0 kubenswrapper[27835]: I0318 13:43:17.943887 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:18.307719 master-0 kubenswrapper[27835]: I0318 13:43:18.307674 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562b6f7a-8086-457b-8f82-cc3c56043ac8" path="/var/lib/kubelet/pods/562b6f7a-8086-457b-8f82-cc3c56043ac8/volumes" Mar 18 13:43:18.380299 master-0 kubenswrapper[27835]: I0318 13:43:18.378765 27835 generic.go:334] "Generic (PLEG): container finished" podID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerID="19cd67eec991225b9b13b9e8eaf173efffee41d378abfb1990727e3839b98a61" exitCode=0 Mar 18 13:43:18.380299 master-0 kubenswrapper[27835]: I0318 13:43:18.378843 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" event={"ID":"529ada08-25e2-4b25-aa95-f6fbb5263c13","Type":"ContainerDied","Data":"19cd67eec991225b9b13b9e8eaf173efffee41d378abfb1990727e3839b98a61"} Mar 18 13:43:18.388597 master-0 kubenswrapper[27835]: I0318 13:43:18.388359 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerStarted","Data":"3558dec8b9ad35dc7342bcd32de8cdbe7f94961bdfbc20eaa99bee14dbc28448"} Mar 18 13:43:18.403650 master-0 kubenswrapper[27835]: I0318 13:43:18.402288 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" 
event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerStarted","Data":"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630"} Mar 18 13:43:18.403650 master-0 kubenswrapper[27835]: I0318 13:43:18.402703 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-external-api-0" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-log" containerID="cri-o://31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" gracePeriod=30 Mar 18 13:43:18.403650 master-0 kubenswrapper[27835]: I0318 13:43:18.402798 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-external-api-0" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-httpd" containerID="cri-o://a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" gracePeriod=30 Mar 18 13:43:18.414901 master-0 kubenswrapper[27835]: I0318 13:43:18.414830 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xcnw2" event={"ID":"81aa1d7d-25c8-4408-a790-4c9fe8ed9742","Type":"ContainerStarted","Data":"564d5b37f0b4618d848bf0331bd8de714a314376c33d0cc0a085b9a66dbf058a"} Mar 18 13:43:18.414901 master-0 kubenswrapper[27835]: I0318 13:43:18.414885 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xcnw2" event={"ID":"81aa1d7d-25c8-4408-a790-4c9fe8ed9742","Type":"ContainerStarted","Data":"883752a276c107244986bc0405bd46532527fc68761e4be4a57a27dec80f015a"} Mar 18 13:43:18.501695 master-0 kubenswrapper[27835]: I0318 13:43:18.496952 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f519-default-internal-api-0" podStartSLOduration=5.496924294 podStartE2EDuration="5.496924294s" podCreationTimestamp="2026-03-18 13:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 13:43:18.411823567 +0000 UTC m=+1162.377035137" watchObservedRunningTime="2026-03-18 13:43:18.496924294 +0000 UTC m=+1162.462135854" Mar 18 13:43:18.553298 master-0 kubenswrapper[27835]: I0318 13:43:18.551742 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f519-default-external-api-0" podStartSLOduration=12.551718466 podStartE2EDuration="12.551718466s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:18.449014234 +0000 UTC m=+1162.414225794" watchObservedRunningTime="2026-03-18 13:43:18.551718466 +0000 UTC m=+1162.516930036" Mar 18 13:43:18.561113 master-0 kubenswrapper[27835]: I0318 13:43:18.560798 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xcnw2" podStartSLOduration=2.560773477 podStartE2EDuration="2.560773477s" podCreationTimestamp="2026-03-18 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:18.482771159 +0000 UTC m=+1162.447982719" watchObservedRunningTime="2026-03-18 13:43:18.560773477 +0000 UTC m=+1162.525985037" Mar 18 13:43:18.623591 master-0 kubenswrapper[27835]: I0318 13:43:18.623520 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-tmkx2"] Mar 18 13:43:18.695259 master-0 kubenswrapper[27835]: I0318 13:43:18.694785 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" Mar 18 13:43:18.863755 master-0 kubenswrapper[27835]: I0318 13:43:18.863708 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.867445 master-0 kubenswrapper[27835]: I0318 13:43:18.864047 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.867445 master-0 kubenswrapper[27835]: I0318 13:43:18.864079 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.867445 master-0 kubenswrapper[27835]: I0318 13:43:18.864107 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6xpp\" (UniqueName: \"kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.867445 master-0 kubenswrapper[27835]: I0318 13:43:18.864129 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.867445 master-0 kubenswrapper[27835]: I0318 13:43:18.864209 27835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb\") pod \"529ada08-25e2-4b25-aa95-f6fbb5263c13\" (UID: \"529ada08-25e2-4b25-aa95-f6fbb5263c13\") " Mar 18 13:43:18.944511 master-0 kubenswrapper[27835]: I0318 13:43:18.939893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp" (OuterVolumeSpecName: "kube-api-access-v6xpp") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "kube-api-access-v6xpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:18.958444 master-0 kubenswrapper[27835]: I0318 13:43:18.952902 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config" (OuterVolumeSpecName: "config") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:18.971446 master-0 kubenswrapper[27835]: I0318 13:43:18.961047 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:18.977086 master-0 kubenswrapper[27835]: I0318 13:43:18.972953 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:18.977086 master-0 kubenswrapper[27835]: I0318 13:43:18.972992 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6xpp\" (UniqueName: \"kubernetes.io/projected/529ada08-25e2-4b25-aa95-f6fbb5263c13-kube-api-access-v6xpp\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:18.977086 master-0 kubenswrapper[27835]: I0318 13:43:18.973003 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:18.978559 master-0 kubenswrapper[27835]: I0318 13:43:18.978493 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:18.986572 master-0 kubenswrapper[27835]: I0318 13:43:18.983462 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:19.029642 master-0 kubenswrapper[27835]: I0318 13:43:19.025449 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "529ada08-25e2-4b25-aa95-f6fbb5263c13" (UID: "529ada08-25e2-4b25-aa95-f6fbb5263c13"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:19.084046 master-0 kubenswrapper[27835]: I0318 13:43:19.083956 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.084046 master-0 kubenswrapper[27835]: I0318 13:43:19.084007 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.084046 master-0 kubenswrapper[27835]: I0318 13:43:19.084017 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529ada08-25e2-4b25-aa95-f6fbb5263c13-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.341286 master-0 kubenswrapper[27835]: I0318 13:43:19.341241 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:19.442439 master-0 kubenswrapper[27835]: I0318 13:43:19.440985 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" event={"ID":"529ada08-25e2-4b25-aa95-f6fbb5263c13","Type":"ContainerDied","Data":"af80f58d9530c589384c0f3c30e14b0cd81e24fb4adb68ffd362986c27d7e130"} Mar 18 13:43:19.442439 master-0 kubenswrapper[27835]: I0318 13:43:19.441074 27835 scope.go:117] "RemoveContainer" containerID="19cd67eec991225b9b13b9e8eaf173efffee41d378abfb1990727e3839b98a61" Mar 18 13:43:19.442439 master-0 kubenswrapper[27835]: I0318 13:43:19.441237 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" Mar 18 13:43:19.464518 master-0 kubenswrapper[27835]: I0318 13:43:19.459325 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerStarted","Data":"88956c789da0b4a9447a2da7209c2c8a8b243b390b656b632bc38beedaecb41a"} Mar 18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468564 27835 generic.go:334] "Generic (PLEG): container finished" podID="be12c3ea-d20c-494b-9467-b13c0f096788" containerID="a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" exitCode=0 Mar 18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468643 27835 generic.go:334] "Generic (PLEG): container finished" podID="be12c3ea-d20c-494b-9467-b13c0f096788" containerID="31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" exitCode=143 Mar 18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468719 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerDied","Data":"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630"} Mar 
18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468828 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerDied","Data":"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e"} Mar 18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468844 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"be12c3ea-d20c-494b-9467-b13c0f096788","Type":"ContainerDied","Data":"eb825cedcb8ba87eb72cf0caec5d54cc3780ecc664afb5ae378daccb40203203"} Mar 18 13:43:19.474673 master-0 kubenswrapper[27835]: I0318 13:43:19.468922 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.498964 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499048 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499277 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 
kubenswrapper[27835]: I0318 13:43:19.499563 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499605 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499770 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499820 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500018 master-0 kubenswrapper[27835]: I0318 13:43:19.499883 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8s6k\" (UniqueName: \"kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k\") pod \"be12c3ea-d20c-494b-9467-b13c0f096788\" (UID: \"be12c3ea-d20c-494b-9467-b13c0f096788\") " Mar 18 13:43:19.500291 master-0 kubenswrapper[27835]: I0318 13:43:19.500005 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:43:19.500604 master-0 kubenswrapper[27835]: I0318 13:43:19.500577 27835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.500920 master-0 kubenswrapper[27835]: I0318 13:43:19.500885 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs" (OuterVolumeSpecName: "logs") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:43:19.529445 master-0 kubenswrapper[27835]: I0318 13:43:19.524800 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k" (OuterVolumeSpecName: "kube-api-access-d8s6k") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "kube-api-access-d8s6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:19.539629 master-0 kubenswrapper[27835]: I0318 13:43:19.530326 27835 scope.go:117] "RemoveContainer" containerID="b5bd8e58feb1dc9c8c8f5e8c9c7be58aabc1f0e512a605ad4a5de4b6c51b5b3d" Mar 18 13:43:19.539629 master-0 kubenswrapper[27835]: I0318 13:43:19.536198 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts" (OuterVolumeSpecName: "scripts") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:19.546750 master-0 kubenswrapper[27835]: I0318 13:43:19.541515 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"] Mar 18 13:43:19.547193 master-0 kubenswrapper[27835]: I0318 13:43:19.547148 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:19.584447 master-0 kubenswrapper[27835]: I0318 13:43:19.582842 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85f88db649-6j6lk"] Mar 18 13:43:19.584447 master-0 kubenswrapper[27835]: I0318 13:43:19.583823 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61" (OuterVolumeSpecName: "glance") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "pvc-ff789d6f-852a-4819-b19c-09444384ecbe". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 13:43:19.598638 master-0 kubenswrapper[27835]: I0318 13:43:19.596472 27835 scope.go:117] "RemoveContainer" containerID="a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" Mar 18 13:43:19.598833 master-0 kubenswrapper[27835]: I0318 13:43:19.598789 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:19.603995 master-0 kubenswrapper[27835]: I0318 13:43:19.603927 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be12c3ea-d20c-494b-9467-b13c0f096788-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.603995 master-0 kubenswrapper[27835]: I0318 13:43:19.603992 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.604111 master-0 kubenswrapper[27835]: I0318 13:43:19.604005 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8s6k\" (UniqueName: \"kubernetes.io/projected/be12c3ea-d20c-494b-9467-b13c0f096788-kube-api-access-d8s6k\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.604111 master-0 kubenswrapper[27835]: I0318 13:43:19.604016 27835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.604111 master-0 kubenswrapper[27835]: I0318 13:43:19.604026 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.604111 master-0 kubenswrapper[27835]: I0318 13:43:19.604056 27835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") on node \"master-0\" " Mar 18 13:43:19.633444 master-0 kubenswrapper[27835]: I0318 13:43:19.629783 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data" (OuterVolumeSpecName: "config-data") pod "be12c3ea-d20c-494b-9467-b13c0f096788" (UID: "be12c3ea-d20c-494b-9467-b13c0f096788"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:19.671756 master-0 kubenswrapper[27835]: I0318 13:43:19.671716 27835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 13:43:19.672139 master-0 kubenswrapper[27835]: I0318 13:43:19.672121 27835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff789d6f-852a-4819-b19c-09444384ecbe" (UniqueName: "kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61") on node "master-0" Mar 18 13:43:19.692634 master-0 kubenswrapper[27835]: I0318 13:43:19.691220 27835 scope.go:117] "RemoveContainer" containerID="31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" Mar 18 13:43:19.706276 master-0 kubenswrapper[27835]: I0318 13:43:19.706217 27835 reconciler_common.go:293] "Volume detached for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.706276 master-0 kubenswrapper[27835]: I0318 13:43:19.706275 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be12c3ea-d20c-494b-9467-b13c0f096788-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:19.733272 master-0 kubenswrapper[27835]: I0318 13:43:19.733206 27835 scope.go:117] "RemoveContainer" containerID="a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" Mar 18 13:43:19.733876 master-0 kubenswrapper[27835]: E0318 13:43:19.733825 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630\": container with ID starting with a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630 not found: ID does not exist" containerID="a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" Mar 18 13:43:19.733933 master-0 kubenswrapper[27835]: I0318 13:43:19.733886 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630"} err="failed to get 
container status \"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630\": rpc error: code = NotFound desc = could not find container \"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630\": container with ID starting with a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630 not found: ID does not exist" Mar 18 13:43:19.733933 master-0 kubenswrapper[27835]: I0318 13:43:19.733923 27835 scope.go:117] "RemoveContainer" containerID="31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" Mar 18 13:43:19.738697 master-0 kubenswrapper[27835]: E0318 13:43:19.738151 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e\": container with ID starting with 31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e not found: ID does not exist" containerID="31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" Mar 18 13:43:19.738697 master-0 kubenswrapper[27835]: I0318 13:43:19.738225 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e"} err="failed to get container status \"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e\": rpc error: code = NotFound desc = could not find container \"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e\": container with ID starting with 31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e not found: ID does not exist" Mar 18 13:43:19.738697 master-0 kubenswrapper[27835]: I0318 13:43:19.738262 27835 scope.go:117] "RemoveContainer" containerID="a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630" Mar 18 13:43:19.742434 master-0 kubenswrapper[27835]: I0318 13:43:19.739201 27835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630"} err="failed to get container status \"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630\": rpc error: code = NotFound desc = could not find container \"a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630\": container with ID starting with a64dcf558cb8f124983c5a7eb09e2e89627d05fcc6460eab6c78830c00cd9630 not found: ID does not exist" Mar 18 13:43:19.742434 master-0 kubenswrapper[27835]: I0318 13:43:19.739234 27835 scope.go:117] "RemoveContainer" containerID="31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e" Mar 18 13:43:19.742434 master-0 kubenswrapper[27835]: I0318 13:43:19.739935 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e"} err="failed to get container status \"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e\": rpc error: code = NotFound desc = could not find container \"31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e\": container with ID starting with 31e572ceb4f2cc48ee633f9a8f2598be65b020db185e3b373b9888020be3423e not found: ID does not exist" Mar 18 13:43:19.865447 master-0 kubenswrapper[27835]: I0318 13:43:19.865090 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:19.948437 master-0 kubenswrapper[27835]: I0318 13:43:19.948067 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.969895 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: E0318 13:43:19.971947 27835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-log" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.971970 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-log" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: E0318 13:43:19.971999 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="dnsmasq-dns" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.972005 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="dnsmasq-dns" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: E0318 13:43:19.972033 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="init" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.972040 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="init" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: E0318 13:43:19.972072 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-httpd" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.972079 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-httpd" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.972713 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-log" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.972778 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" containerName="glance-httpd" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 
13:43:19.972798 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="dnsmasq-dns" Mar 18 13:43:19.975450 master-0 kubenswrapper[27835]: I0318 13:43:19.974175 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:19.983443 master-0 kubenswrapper[27835]: I0318 13:43:19.979109 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-external-config-data" Mar 18 13:43:19.983443 master-0 kubenswrapper[27835]: I0318 13:43:19.979367 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 13:43:20.003438 master-0 kubenswrapper[27835]: I0318 13:43:20.001400 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:20.123514 master-0 kubenswrapper[27835]: I0318 13:43:20.123088 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.123514 master-0 kubenswrapper[27835]: I0318 13:43:20.123468 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.123741 master-0 kubenswrapper[27835]: I0318 13:43:20.123586 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.123741 master-0 kubenswrapper[27835]: I0318 13:43:20.123724 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.123812 master-0 kubenswrapper[27835]: I0318 13:43:20.123758 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.123812 master-0 kubenswrapper[27835]: I0318 13:43:20.123800 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.126900 master-0 kubenswrapper[27835]: I0318 13:43:20.123945 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.126900 master-0 
kubenswrapper[27835]: I0318 13:43:20.124052 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgk8s\" (UniqueName: \"kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.126900 master-0 kubenswrapper[27835]: E0318 13:43:20.126687 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe12c3ea_d20c_494b_9467_b13c0f096788.slice/crio-eb825cedcb8ba87eb72cf0caec5d54cc3780ecc664afb5ae378daccb40203203\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe12c3ea_d20c_494b_9467_b13c0f096788.slice\": RecentStats: unable to find data in memory cache]" Mar 18 13:43:20.226443 master-0 kubenswrapper[27835]: I0318 13:43:20.225928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226443 master-0 kubenswrapper[27835]: I0318 13:43:20.226188 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226443 master-0 kubenswrapper[27835]: I0318 13:43:20.226337 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226756 master-0 kubenswrapper[27835]: I0318 13:43:20.226471 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226756 master-0 kubenswrapper[27835]: I0318 13:43:20.226599 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgk8s\" (UniqueName: \"kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226756 master-0 kubenswrapper[27835]: I0318 13:43:20.226712 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.226756 master-0 kubenswrapper[27835]: I0318 13:43:20.226729 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.231435 master-0 kubenswrapper[27835]: I0318 13:43:20.226965 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.231435 master-0 kubenswrapper[27835]: I0318 13:43:20.227009 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.231435 master-0 kubenswrapper[27835]: I0318 13:43:20.227101 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.231435 master-0 kubenswrapper[27835]: I0318 13:43:20.229193 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 13:43:20.231435 master-0 kubenswrapper[27835]: I0318 13:43:20.229218 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d7d2380a2367ec81f9f9b44b1b86eaac9ba6ff0ab5cc582d80f5ba97c51d1f86/globalmount\"" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.235435 master-0 kubenswrapper[27835]: I0318 13:43:20.232275 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.239431 master-0 kubenswrapper[27835]: I0318 13:43:20.237521 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.239431 master-0 kubenswrapper[27835]: I0318 13:43:20.238534 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.250439 master-0 kubenswrapper[27835]: I0318 13:43:20.250159 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgk8s\" (UniqueName: 
\"kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.258446 master-0 kubenswrapper[27835]: I0318 13:43:20.256993 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:20.300461 master-0 kubenswrapper[27835]: I0318 13:43:20.299803 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" path="/var/lib/kubelet/pods/529ada08-25e2-4b25-aa95-f6fbb5263c13/volumes" Mar 18 13:43:20.305441 master-0 kubenswrapper[27835]: I0318 13:43:20.300820 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be12c3ea-d20c-494b-9467-b13c0f096788" path="/var/lib/kubelet/pods/be12c3ea-d20c-494b-9467-b13c0f096788/volumes" Mar 18 13:43:20.488223 master-0 kubenswrapper[27835]: I0318 13:43:20.488142 27835 generic.go:334] "Generic (PLEG): container finished" podID="fa96d61e-36ca-4846-a008-82052eff4ab8" containerID="34d2a0adebc7131cf2dc3dfb6494ccbf6824a761e785f1ac83c71d0751212918" exitCode=0 Mar 18 13:43:20.489627 master-0 kubenswrapper[27835]: I0318 13:43:20.488240 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8mqtx" event={"ID":"fa96d61e-36ca-4846-a008-82052eff4ab8","Type":"ContainerDied","Data":"34d2a0adebc7131cf2dc3dfb6494ccbf6824a761e785f1ac83c71d0751212918"} Mar 18 13:43:21.652659 master-0 kubenswrapper[27835]: I0318 13:43:21.652603 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:21.805263 master-0 kubenswrapper[27835]: I0318 13:43:21.804550 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:23.199794 master-0 kubenswrapper[27835]: I0318 13:43:23.199725 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85f88db649-6j6lk" podUID="529ada08-25e2-4b25-aa95-f6fbb5263c13" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.192:5353: i/o timeout" Mar 18 13:43:25.306048 master-0 kubenswrapper[27835]: I0318 13:43:25.305988 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:25.306048 master-0 kubenswrapper[27835]: I0318 13:43:25.306045 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:25.337391 master-0 kubenswrapper[27835]: I0318 13:43:25.337342 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:25.351831 master-0 kubenswrapper[27835]: I0318 13:43:25.351769 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:25.563068 master-0 kubenswrapper[27835]: I0318 13:43:25.562913 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:25.563315 master-0 kubenswrapper[27835]: I0318 13:43:25.563281 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:28.892969 master-0 
kubenswrapper[27835]: I0318 13:43:28.892062 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:28.892969 master-0 kubenswrapper[27835]: I0318 13:43:28.892238 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:43:28.912017 master-0 kubenswrapper[27835]: I0318 13:43:28.911931 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:43:30.363110 master-0 kubenswrapper[27835]: I0318 13:43:30.363046 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:30.491324 master-0 kubenswrapper[27835]: I0318 13:43:30.491237 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29d84\" (UniqueName: \"kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84\") pod \"fa96d61e-36ca-4846-a008-82052eff4ab8\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " Mar 18 13:43:30.491558 master-0 kubenswrapper[27835]: I0318 13:43:30.491370 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle\") pod \"fa96d61e-36ca-4846-a008-82052eff4ab8\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " Mar 18 13:43:30.491558 master-0 kubenswrapper[27835]: I0318 13:43:30.491477 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data\") pod \"fa96d61e-36ca-4846-a008-82052eff4ab8\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " Mar 18 13:43:30.491558 master-0 kubenswrapper[27835]: I0318 13:43:30.491497 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts\") pod \"fa96d61e-36ca-4846-a008-82052eff4ab8\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " Mar 18 13:43:30.491674 master-0 kubenswrapper[27835]: I0318 13:43:30.491627 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs\") pod \"fa96d61e-36ca-4846-a008-82052eff4ab8\" (UID: \"fa96d61e-36ca-4846-a008-82052eff4ab8\") " Mar 18 13:43:30.492399 master-0 kubenswrapper[27835]: I0318 13:43:30.492362 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs" (OuterVolumeSpecName: "logs") pod "fa96d61e-36ca-4846-a008-82052eff4ab8" (UID: "fa96d61e-36ca-4846-a008-82052eff4ab8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:43:30.496923 master-0 kubenswrapper[27835]: I0318 13:43:30.496866 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84" (OuterVolumeSpecName: "kube-api-access-29d84") pod "fa96d61e-36ca-4846-a008-82052eff4ab8" (UID: "fa96d61e-36ca-4846-a008-82052eff4ab8"). InnerVolumeSpecName "kube-api-access-29d84". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:30.497156 master-0 kubenswrapper[27835]: I0318 13:43:30.497126 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts" (OuterVolumeSpecName: "scripts") pod "fa96d61e-36ca-4846-a008-82052eff4ab8" (UID: "fa96d61e-36ca-4846-a008-82052eff4ab8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:30.518785 master-0 kubenswrapper[27835]: I0318 13:43:30.518708 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data" (OuterVolumeSpecName: "config-data") pod "fa96d61e-36ca-4846-a008-82052eff4ab8" (UID: "fa96d61e-36ca-4846-a008-82052eff4ab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:30.543332 master-0 kubenswrapper[27835]: I0318 13:43:30.543257 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa96d61e-36ca-4846-a008-82052eff4ab8" (UID: "fa96d61e-36ca-4846-a008-82052eff4ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:30.597073 master-0 kubenswrapper[27835]: I0318 13:43:30.597010 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29d84\" (UniqueName: \"kubernetes.io/projected/fa96d61e-36ca-4846-a008-82052eff4ab8-kube-api-access-29d84\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:30.597073 master-0 kubenswrapper[27835]: I0318 13:43:30.597067 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:30.597227 master-0 kubenswrapper[27835]: I0318 13:43:30.597082 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:30.597227 master-0 kubenswrapper[27835]: I0318 13:43:30.597099 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fa96d61e-36ca-4846-a008-82052eff4ab8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:30.597227 master-0 kubenswrapper[27835]: I0318 13:43:30.597112 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa96d61e-36ca-4846-a008-82052eff4ab8-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:30.625476 master-0 kubenswrapper[27835]: I0318 13:43:30.625378 27835 generic.go:334] "Generic (PLEG): container finished" podID="81aa1d7d-25c8-4408-a790-4c9fe8ed9742" containerID="564d5b37f0b4618d848bf0331bd8de714a314376c33d0cc0a085b9a66dbf058a" exitCode=0 Mar 18 13:43:30.625476 master-0 kubenswrapper[27835]: I0318 13:43:30.625455 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xcnw2" event={"ID":"81aa1d7d-25c8-4408-a790-4c9fe8ed9742","Type":"ContainerDied","Data":"564d5b37f0b4618d848bf0331bd8de714a314376c33d0cc0a085b9a66dbf058a"} Mar 18 13:43:30.628675 master-0 kubenswrapper[27835]: I0318 13:43:30.628637 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8mqtx" event={"ID":"fa96d61e-36ca-4846-a008-82052eff4ab8","Type":"ContainerDied","Data":"f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48"} Mar 18 13:43:30.628775 master-0 kubenswrapper[27835]: I0318 13:43:30.628684 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f99e97c31b64caa5dbe8403225e7bdd2df3d7af9230fd2b8fc2ef0c90357df48" Mar 18 13:43:30.628775 master-0 kubenswrapper[27835]: I0318 13:43:30.628699 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8mqtx" Mar 18 13:43:31.626207 master-0 kubenswrapper[27835]: I0318 13:43:31.626078 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:43:31.627044 master-0 kubenswrapper[27835]: E0318 13:43:31.626950 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa96d61e-36ca-4846-a008-82052eff4ab8" containerName="placement-db-sync" Mar 18 13:43:31.627044 master-0 kubenswrapper[27835]: I0318 13:43:31.626980 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa96d61e-36ca-4846-a008-82052eff4ab8" containerName="placement-db-sync" Mar 18 13:43:31.656447 master-0 kubenswrapper[27835]: I0318 13:43:31.652779 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa96d61e-36ca-4846-a008-82052eff4ab8" containerName="placement-db-sync" Mar 18 13:43:31.656447 master-0 kubenswrapper[27835]: I0318 13:43:31.654872 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.659281 master-0 kubenswrapper[27835]: I0318 13:43:31.657750 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:43:31.662036 master-0 kubenswrapper[27835]: I0318 13:43:31.660021 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 18 13:43:31.662036 master-0 kubenswrapper[27835]: I0318 13:43:31.660335 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 18 13:43:31.662036 master-0 kubenswrapper[27835]: I0318 13:43:31.660970 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 18 13:43:31.672667 master-0 kubenswrapper[27835]: I0318 13:43:31.672613 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 18 13:43:31.724639 master-0 kubenswrapper[27835]: I0318 13:43:31.724385 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrvrb\" (UniqueName: \"kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725046 master-0 kubenswrapper[27835]: I0318 13:43:31.724739 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725046 master-0 kubenswrapper[27835]: I0318 13:43:31.724830 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725046 master-0 kubenswrapper[27835]: I0318 13:43:31.724907 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725234 master-0 kubenswrapper[27835]: I0318 13:43:31.725201 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725282 master-0 kubenswrapper[27835]: I0318 13:43:31.725241 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.725344 master-0 kubenswrapper[27835]: I0318 13:43:31.725303 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827643 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827728 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827774 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827860 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827881 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827908 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.828245 master-0 kubenswrapper[27835]: I0318 13:43:31.827951 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrvrb\" (UniqueName: \"kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.829374 master-0 kubenswrapper[27835]: I0318 13:43:31.829335 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.832552 master-0 kubenswrapper[27835]: I0318 13:43:31.832509 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.833710 master-0 kubenswrapper[27835]: I0318 13:43:31.833263 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.833710 master-0 kubenswrapper[27835]: I0318 13:43:31.833645 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.835041 master-0 kubenswrapper[27835]: I0318 13:43:31.834997 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.845803 master-0 kubenswrapper[27835]: I0318 13:43:31.845759 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.845803 master-0 kubenswrapper[27835]: I0318 13:43:31.845776 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrvrb\" (UniqueName: \"kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb\") pod \"placement-6c47db445d-25kc6\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:31.993056 master-0 kubenswrapper[27835]: I0318 13:43:31.992987 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:32.885867 master-0 kubenswrapper[27835]: I0318 13:43:32.885828 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.954512 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.954668 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.954747 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.954852 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.954975 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.955221 master-0 kubenswrapper[27835]: I0318 13:43:32.955044 27835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-98k2g\" (UniqueName: \"kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g\") pod \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\" (UID: \"81aa1d7d-25c8-4408-a790-4c9fe8ed9742\") " Mar 18 13:43:32.958301 master-0 kubenswrapper[27835]: I0318 13:43:32.958238 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:32.958793 master-0 kubenswrapper[27835]: I0318 13:43:32.958607 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts" (OuterVolumeSpecName: "scripts") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:32.959281 master-0 kubenswrapper[27835]: I0318 13:43:32.958942 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g" (OuterVolumeSpecName: "kube-api-access-98k2g") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). InnerVolumeSpecName "kube-api-access-98k2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:32.959809 master-0 kubenswrapper[27835]: I0318 13:43:32.959760 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:32.992360 master-0 kubenswrapper[27835]: I0318 13:43:32.991963 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:32.995341 master-0 kubenswrapper[27835]: I0318 13:43:32.995244 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data" (OuterVolumeSpecName: "config-data") pod "81aa1d7d-25c8-4408-a790-4c9fe8ed9742" (UID: "81aa1d7d-25c8-4408-a790-4c9fe8ed9742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.058910 27835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.058955 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.058965 27835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.058973 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.059010 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98k2g\" (UniqueName: \"kubernetes.io/projected/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-kube-api-access-98k2g\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.059023 master-0 kubenswrapper[27835]: I0318 13:43:33.059020 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81aa1d7d-25c8-4408-a790-4c9fe8ed9742-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:33.682320 master-0 kubenswrapper[27835]: I0318 13:43:33.682267 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xcnw2" event={"ID":"81aa1d7d-25c8-4408-a790-4c9fe8ed9742","Type":"ContainerDied","Data":"883752a276c107244986bc0405bd46532527fc68761e4be4a57a27dec80f015a"} Mar 18 13:43:33.682320 master-0 kubenswrapper[27835]: I0318 13:43:33.682314 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="883752a276c107244986bc0405bd46532527fc68761e4be4a57a27dec80f015a" Mar 18 13:43:33.682655 master-0 kubenswrapper[27835]: I0318 13:43:33.682364 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xcnw2" Mar 18 13:43:34.096431 master-0 kubenswrapper[27835]: I0318 13:43:34.096275 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-58d6fd9d55-fhlft"] Mar 18 13:43:34.097037 master-0 kubenswrapper[27835]: E0318 13:43:34.097008 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aa1d7d-25c8-4408-a790-4c9fe8ed9742" containerName="keystone-bootstrap" Mar 18 13:43:34.097093 master-0 kubenswrapper[27835]: I0318 13:43:34.097037 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aa1d7d-25c8-4408-a790-4c9fe8ed9742" containerName="keystone-bootstrap" Mar 18 13:43:34.097386 master-0 kubenswrapper[27835]: I0318 13:43:34.097351 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="81aa1d7d-25c8-4408-a790-4c9fe8ed9742" containerName="keystone-bootstrap" Mar 18 13:43:34.098168 master-0 kubenswrapper[27835]: I0318 13:43:34.098128 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.100206 master-0 kubenswrapper[27835]: I0318 13:43:34.100130 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 18 13:43:34.102187 master-0 kubenswrapper[27835]: I0318 13:43:34.102136 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 13:43:34.102956 master-0 kubenswrapper[27835]: I0318 13:43:34.102924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 18 13:43:34.103124 master-0 kubenswrapper[27835]: I0318 13:43:34.103104 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 13:43:34.103245 master-0 kubenswrapper[27835]: I0318 13:43:34.103227 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 13:43:34.131121 master-0 kubenswrapper[27835]: I0318 13:43:34.131019 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-58d6fd9d55-fhlft"] Mar 18 13:43:34.184254 master-0 kubenswrapper[27835]: I0318 13:43:34.184212 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-public-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184434 master-0 kubenswrapper[27835]: I0318 13:43:34.184270 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-combined-ca-bundle\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184434 
master-0 kubenswrapper[27835]: I0318 13:43:34.184350 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-fernet-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184557 master-0 kubenswrapper[27835]: I0318 13:43:34.184431 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-scripts\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184557 master-0 kubenswrapper[27835]: I0318 13:43:34.184483 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-credential-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184897 master-0 kubenswrapper[27835]: I0318 13:43:34.184862 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-config-data\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.184960 master-0 kubenswrapper[27835]: I0318 13:43:34.184942 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtvfq\" (UniqueName: \"kubernetes.io/projected/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-kube-api-access-dtvfq\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " 
pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.185012 master-0 kubenswrapper[27835]: I0318 13:43:34.184997 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-internal-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.287745 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-config-data\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.287845 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtvfq\" (UniqueName: \"kubernetes.io/projected/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-kube-api-access-dtvfq\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.287889 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-internal-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.288007 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-public-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: 
\"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.288055 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-combined-ca-bundle\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.288089 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-fernet-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.288120 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-scripts\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.290405 master-0 kubenswrapper[27835]: I0318 13:43:34.288164 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-credential-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.306844 master-0 kubenswrapper[27835]: I0318 13:43:34.305313 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-combined-ca-bundle\") pod \"keystone-58d6fd9d55-fhlft\" (UID: 
\"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.311139 master-0 kubenswrapper[27835]: I0318 13:43:34.311060 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-public-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.318312 master-0 kubenswrapper[27835]: I0318 13:43:34.318169 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-config-data\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.322440 master-0 kubenswrapper[27835]: I0318 13:43:34.319036 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-fernet-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.326400 master-0 kubenswrapper[27835]: I0318 13:43:34.323862 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-scripts\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.327204 master-0 kubenswrapper[27835]: I0318 13:43:34.327128 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-credential-keys\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 
18 13:43:34.330435 master-0 kubenswrapper[27835]: I0318 13:43:34.327971 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-internal-tls-certs\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.349490 master-0 kubenswrapper[27835]: I0318 13:43:34.349180 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtvfq\" (UniqueName: \"kubernetes.io/projected/88fc7fd1-97a2-4879-b813-d29ddcc4d3b0-kube-api-access-dtvfq\") pod \"keystone-58d6fd9d55-fhlft\" (UID: \"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0\") " pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.458500 master-0 kubenswrapper[27835]: I0318 13:43:34.450305 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:34.696988 master-0 kubenswrapper[27835]: I0318 13:43:34.695760 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerStarted","Data":"4f8b886e162af17350a6390eb4e9b7f9c28f506083f90dd4dc57555fd466c11a"} Mar 18 13:43:34.955452 master-0 kubenswrapper[27835]: I0318 13:43:34.954907 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:43:35.013929 master-0 kubenswrapper[27835]: I0318 13:43:35.013846 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:43:35.104672 master-0 kubenswrapper[27835]: I0318 13:43:35.104617 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-58d6fd9d55-fhlft"] Mar 18 13:43:35.723057 master-0 kubenswrapper[27835]: I0318 13:43:35.722436 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-07518-db-sync-jhwx7" event={"ID":"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321","Type":"ContainerStarted","Data":"e2636cb1cd9101b9d35f358ca47f86d0948b46fce6ed128b3743c3c241466123"} Mar 18 13:43:35.733445 master-0 kubenswrapper[27835]: I0318 13:43:35.732672 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerStarted","Data":"8b770528ca0aacbabf474fe5cb46f81ed7316ab9070d767a1a2654780b3d081b"} Mar 18 13:43:35.733445 master-0 kubenswrapper[27835]: I0318 13:43:35.732736 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerStarted","Data":"d23185235997ef0f8220a1a23ccb2ba865e4e882eba38ecf441d6f35fcb9dbce"} Mar 18 13:43:35.747686 master-0 kubenswrapper[27835]: I0318 13:43:35.747626 27835 generic.go:334] "Generic (PLEG): container finished" podID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerID="4f8b886e162af17350a6390eb4e9b7f9c28f506083f90dd4dc57555fd466c11a" exitCode=0 Mar 18 13:43:35.747916 master-0 kubenswrapper[27835]: I0318 13:43:35.747737 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerDied","Data":"4f8b886e162af17350a6390eb4e9b7f9c28f506083f90dd4dc57555fd466c11a"} Mar 18 13:43:35.748195 master-0 kubenswrapper[27835]: I0318 13:43:35.748123 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-db-sync-jhwx7" podStartSLOduration=4.18808928 podStartE2EDuration="29.748102651s" podCreationTimestamp="2026-03-18 13:43:06 +0000 UTC" firstStartedPulling="2026-03-18 13:43:08.609213829 +0000 UTC m=+1152.574425389" lastFinishedPulling="2026-03-18 13:43:34.1692272 +0000 UTC m=+1178.134438760" observedRunningTime="2026-03-18 13:43:35.747106585 +0000 UTC 
m=+1179.712318145" watchObservedRunningTime="2026-03-18 13:43:35.748102651 +0000 UTC m=+1179.713314221" Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.751761 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-58d6fd9d55-fhlft" event={"ID":"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0","Type":"ContainerStarted","Data":"99e7a52e6f6c22b4d2110133577c275a83a63fdfaadbb5213f24eca09441bac8"} Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.751812 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-58d6fd9d55-fhlft" event={"ID":"88fc7fd1-97a2-4879-b813-d29ddcc4d3b0","Type":"ContainerStarted","Data":"2166e3ecf9176e260d9ac6eef9b2af742da983458a069af24fb3046ff0c9a0fe"} Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.752668 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-58d6fd9d55-fhlft" Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.754366 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerStarted","Data":"2f8dfe58ba6f40cba9e5cee1635a31c77ec8ffe24c85dadafa02aacae14c92e8"} Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.754419 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerStarted","Data":"1ee0190063a88a212d966248f9e1f2c5d2ac68c455f5aff12f178d4ff423a4e1"} Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.754431 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerStarted","Data":"3a71004c61f057c46e8c65260810d7aee461fe2099126319927dd2b6849e078a"} Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 
13:43:35.755310 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:35.755461 master-0 kubenswrapper[27835]: I0318 13:43:35.755334 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:43:35.812434 master-0 kubenswrapper[27835]: I0318 13:43:35.807547 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-58d6fd9d55-fhlft" podStartSLOduration=1.806004267 podStartE2EDuration="1.806004267s" podCreationTimestamp="2026-03-18 13:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:35.792005205 +0000 UTC m=+1179.757216775" watchObservedRunningTime="2026-03-18 13:43:35.806004267 +0000 UTC m=+1179.771215827" Mar 18 13:43:35.846435 master-0 kubenswrapper[27835]: I0318 13:43:35.842589 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c47db445d-25kc6" podStartSLOduration=4.8425718159999995 podStartE2EDuration="4.842571816s" podCreationTimestamp="2026-03-18 13:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:35.815458477 +0000 UTC m=+1179.780670057" watchObservedRunningTime="2026-03-18 13:43:35.842571816 +0000 UTC m=+1179.807783376" Mar 18 13:43:36.768448 master-0 kubenswrapper[27835]: I0318 13:43:36.768106 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerStarted","Data":"f8f4c7cbe32c6357cee6ef56f3a6f6472666bb8fc1029ecdf447f80277df6df6"} Mar 18 13:43:36.771004 master-0 kubenswrapper[27835]: I0318 13:43:36.770945 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" 
event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerStarted","Data":"db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54"} Mar 18 13:43:36.801121 master-0 kubenswrapper[27835]: I0318 13:43:36.801029 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-tmkx2" podStartSLOduration=4.270363817 podStartE2EDuration="19.801009547s" podCreationTimestamp="2026-03-18 13:43:17 +0000 UTC" firstStartedPulling="2026-03-18 13:43:18.669311834 +0000 UTC m=+1162.634523394" lastFinishedPulling="2026-03-18 13:43:34.199957564 +0000 UTC m=+1178.165169124" observedRunningTime="2026-03-18 13:43:36.798147441 +0000 UTC m=+1180.763359021" watchObservedRunningTime="2026-03-18 13:43:36.801009547 +0000 UTC m=+1180.766221107" Mar 18 13:43:36.835225 master-0 kubenswrapper[27835]: I0318 13:43:36.835137 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f519-default-external-api-0" podStartSLOduration=17.835118871 podStartE2EDuration="17.835118871s" podCreationTimestamp="2026-03-18 13:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:36.829306197 +0000 UTC m=+1180.794517787" watchObservedRunningTime="2026-03-18 13:43:36.835118871 +0000 UTC m=+1180.800330431" Mar 18 13:43:37.784391 master-0 kubenswrapper[27835]: I0318 13:43:37.784258 27835 generic.go:334] "Generic (PLEG): container finished" podID="66ceeb4b-18bd-4d26-a1e7-ef700771aeec" containerID="9fd85f008728cfa85974596cac191e10f4d840ded1a79fbb3cd49c6801164197" exitCode=0 Mar 18 13:43:37.784966 master-0 kubenswrapper[27835]: I0318 13:43:37.784757 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtjpd" event={"ID":"66ceeb4b-18bd-4d26-a1e7-ef700771aeec","Type":"ContainerDied","Data":"9fd85f008728cfa85974596cac191e10f4d840ded1a79fbb3cd49c6801164197"} Mar 18 13:43:38.388648 master-0 
kubenswrapper[27835]: I0318 13:43:38.388560 27835 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfe8ccd0f-8ecc-4d54-970d-3812c661d62b"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podfe8ccd0f-8ecc-4d54-970d-3812c661d62b] : Timed out while waiting for systemd to remove kubepods-besteffort-podfe8ccd0f_8ecc_4d54_970d_3812c661d62b.slice" Mar 18 13:43:38.388907 master-0 kubenswrapper[27835]: E0318 13:43:38.388656 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podfe8ccd0f-8ecc-4d54-970d-3812c661d62b] : unable to destroy cgroup paths for cgroup [kubepods besteffort podfe8ccd0f-8ecc-4d54-970d-3812c661d62b] : Timed out while waiting for systemd to remove kubepods-besteffort-podfe8ccd0f_8ecc_4d54_970d_3812c661d62b.slice" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" Mar 18 13:43:38.793230 master-0 kubenswrapper[27835]: I0318 13:43:38.793187 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbb956cfc-8ggj8" Mar 18 13:43:38.824696 master-0 kubenswrapper[27835]: I0318 13:43:38.822607 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"] Mar 18 13:43:38.846384 master-0 kubenswrapper[27835]: I0318 13:43:38.846330 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bbb956cfc-8ggj8"] Mar 18 13:43:39.206815 master-0 kubenswrapper[27835]: I0318 13:43:39.206748 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:39.349196 master-0 kubenswrapper[27835]: I0318 13:43:39.349101 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config\") pod \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " Mar 18 13:43:39.349607 master-0 kubenswrapper[27835]: I0318 13:43:39.349444 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle\") pod \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " Mar 18 13:43:39.349607 master-0 kubenswrapper[27835]: I0318 13:43:39.349540 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2rf9\" (UniqueName: \"kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9\") pod \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\" (UID: \"66ceeb4b-18bd-4d26-a1e7-ef700771aeec\") " Mar 18 13:43:39.355157 master-0 kubenswrapper[27835]: I0318 13:43:39.355046 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9" (OuterVolumeSpecName: "kube-api-access-x2rf9") pod "66ceeb4b-18bd-4d26-a1e7-ef700771aeec" (UID: "66ceeb4b-18bd-4d26-a1e7-ef700771aeec"). InnerVolumeSpecName "kube-api-access-x2rf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:39.377103 master-0 kubenswrapper[27835]: I0318 13:43:39.376981 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config" (OuterVolumeSpecName: "config") pod "66ceeb4b-18bd-4d26-a1e7-ef700771aeec" (UID: "66ceeb4b-18bd-4d26-a1e7-ef700771aeec"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:39.377788 master-0 kubenswrapper[27835]: I0318 13:43:39.377740 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66ceeb4b-18bd-4d26-a1e7-ef700771aeec" (UID: "66ceeb4b-18bd-4d26-a1e7-ef700771aeec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:39.453136 master-0 kubenswrapper[27835]: I0318 13:43:39.453089 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:39.453136 master-0 kubenswrapper[27835]: I0318 13:43:39.453127 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2rf9\" (UniqueName: \"kubernetes.io/projected/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-kube-api-access-x2rf9\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:39.453136 master-0 kubenswrapper[27835]: I0318 13:43:39.453139 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/66ceeb4b-18bd-4d26-a1e7-ef700771aeec-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:39.807674 master-0 kubenswrapper[27835]: I0318 13:43:39.807608 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-mtjpd" event={"ID":"66ceeb4b-18bd-4d26-a1e7-ef700771aeec","Type":"ContainerDied","Data":"0087979584d94b514f954ab39d3f878c8e3d3c172ab0b30f7b1e26087ba298a1"} Mar 18 13:43:39.807674 master-0 kubenswrapper[27835]: I0318 13:43:39.807651 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0087979584d94b514f954ab39d3f878c8e3d3c172ab0b30f7b1e26087ba298a1" Mar 18 13:43:39.808330 master-0 
kubenswrapper[27835]: I0318 13:43:39.807701 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-mtjpd" Mar 18 13:43:40.101639 master-0 kubenswrapper[27835]: I0318 13:43:40.101446 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:40.102045 master-0 kubenswrapper[27835]: E0318 13:43:40.102013 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ceeb4b-18bd-4d26-a1e7-ef700771aeec" containerName="neutron-db-sync" Mar 18 13:43:40.102045 master-0 kubenswrapper[27835]: I0318 13:43:40.102033 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ceeb4b-18bd-4d26-a1e7-ef700771aeec" containerName="neutron-db-sync" Mar 18 13:43:40.102324 master-0 kubenswrapper[27835]: I0318 13:43:40.102303 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ceeb4b-18bd-4d26-a1e7-ef700771aeec" containerName="neutron-db-sync" Mar 18 13:43:40.104508 master-0 kubenswrapper[27835]: I0318 13:43:40.103628 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.116319 master-0 kubenswrapper[27835]: I0318 13:43:40.116129 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:40.180737 master-0 kubenswrapper[27835]: I0318 13:43:40.180683 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.180998 master-0 kubenswrapper[27835]: I0318 13:43:40.180958 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.181751 master-0 kubenswrapper[27835]: I0318 13:43:40.181082 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj67l\" (UniqueName: \"kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.181926 master-0 kubenswrapper[27835]: I0318 13:43:40.181899 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.181976 master-0 kubenswrapper[27835]: I0318 
13:43:40.181958 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.182196 master-0 kubenswrapper[27835]: I0318 13:43:40.182160 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.207099 master-0 kubenswrapper[27835]: I0318 13:43:40.206952 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"] Mar 18 13:43:40.210548 master-0 kubenswrapper[27835]: I0318 13:43:40.210501 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.217587 master-0 kubenswrapper[27835]: I0318 13:43:40.214872 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 18 13:43:40.217587 master-0 kubenswrapper[27835]: I0318 13:43:40.216562 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 18 13:43:40.221809 master-0 kubenswrapper[27835]: I0318 13:43:40.220906 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 18 13:43:40.226008 master-0 kubenswrapper[27835]: I0318 13:43:40.225960 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"] Mar 18 13:43:40.286028 master-0 kubenswrapper[27835]: I0318 13:43:40.285904 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.286028 master-0 kubenswrapper[27835]: I0318 13:43:40.285991 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj67l\" (UniqueName: \"kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.287351 master-0 kubenswrapper[27835]: I0318 13:43:40.287306 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.287574 master-0 
kubenswrapper[27835]: I0318 13:43:40.287516 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.287634 master-0 kubenswrapper[27835]: I0318 13:43:40.287612 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.287717 master-0 kubenswrapper[27835]: I0318 13:43:40.287694 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6cs\" (UniqueName: \"kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.287828 master-0 kubenswrapper[27835]: I0318 13:43:40.287807 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.287930 master-0 kubenswrapper[27835]: I0318 13:43:40.287910 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " 
pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.287972 master-0 kubenswrapper[27835]: I0318 13:43:40.287948 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.288480 master-0 kubenswrapper[27835]: I0318 13:43:40.288441 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.289130 master-0 kubenswrapper[27835]: I0318 13:43:40.289101 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.289458 master-0 kubenswrapper[27835]: I0318 13:43:40.289240 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.289458 master-0 kubenswrapper[27835]: I0318 13:43:40.289406 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " 
pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.289545 master-0 kubenswrapper[27835]: I0318 13:43:40.289488 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.289803 master-0 kubenswrapper[27835]: I0318 13:43:40.289773 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.290035 master-0 kubenswrapper[27835]: I0318 13:43:40.289994 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.329586 master-0 kubenswrapper[27835]: I0318 13:43:40.328864 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj67l\" (UniqueName: \"kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l\") pod \"dnsmasq-dns-76c8b6b8bf-bpq6d\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.336775 master-0 kubenswrapper[27835]: I0318 13:43:40.336653 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8ccd0f-8ecc-4d54-970d-3812c661d62b" path="/var/lib/kubelet/pods/fe8ccd0f-8ecc-4d54-970d-3812c661d62b/volumes" Mar 18 13:43:40.390913 master-0 kubenswrapper[27835]: I0318 13:43:40.390770 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.392368 master-0 kubenswrapper[27835]: I0318 13:43:40.391193 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.392368 master-0 kubenswrapper[27835]: I0318 13:43:40.391334 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.392368 master-0 kubenswrapper[27835]: I0318 13:43:40.391924 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr6cs\" (UniqueName: \"kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.392368 master-0 kubenswrapper[27835]: I0318 13:43:40.392018 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.395498 master-0 kubenswrapper[27835]: I0318 13:43:40.395450 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.396986 master-0 kubenswrapper[27835]: I0318 13:43:40.396945 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.397378 master-0 kubenswrapper[27835]: I0318 13:43:40.397357 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.399749 master-0 kubenswrapper[27835]: I0318 13:43:40.399696 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.413248 master-0 kubenswrapper[27835]: I0318 13:43:40.413186 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr6cs\" (UniqueName: \"kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs\") pod \"neutron-6d475bdf48-lskhl\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:40.471737 master-0 kubenswrapper[27835]: I0318 13:43:40.471638 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:40.533431 master-0 kubenswrapper[27835]: I0318 13:43:40.533202 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:41.066076 master-0 kubenswrapper[27835]: I0318 13:43:41.065633 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:41.269760 master-0 kubenswrapper[27835]: W0318 13:43:41.269695 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb691f3_dd32_4f7d_afa4_4d0980740b64.slice/crio-cd19598ca1d89a078408de8f0131d30772ac5fb3b8af46ae88f7c16c35bcefb3 WatchSource:0}: Error finding container cd19598ca1d89a078408de8f0131d30772ac5fb3b8af46ae88f7c16c35bcefb3: Status 404 returned error can't find the container with id cd19598ca1d89a078408de8f0131d30772ac5fb3b8af46ae88f7c16c35bcefb3 Mar 18 13:43:41.270879 master-0 kubenswrapper[27835]: I0318 13:43:41.270816 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"] Mar 18 13:43:41.805646 master-0 kubenswrapper[27835]: I0318 13:43:41.805582 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:41.805646 master-0 kubenswrapper[27835]: I0318 13:43:41.805645 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:41.837822 master-0 kubenswrapper[27835]: I0318 13:43:41.837695 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:41.842750 master-0 kubenswrapper[27835]: I0318 13:43:41.842633 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" 
event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerStarted","Data":"e862540dc618ae0a90bacb8bc47bc230cb6d3a28836d17d998ad9600dce1898f"} Mar 18 13:43:41.842750 master-0 kubenswrapper[27835]: I0318 13:43:41.842682 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerStarted","Data":"26067cdf644c0ff8b6005aa4255c18ea6a5029445373767e9f274f38676036a1"} Mar 18 13:43:41.842750 master-0 kubenswrapper[27835]: I0318 13:43:41.842694 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerStarted","Data":"cd19598ca1d89a078408de8f0131d30772ac5fb3b8af46ae88f7c16c35bcefb3"} Mar 18 13:43:41.843046 master-0 kubenswrapper[27835]: I0318 13:43:41.842787 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:43:41.852109 master-0 kubenswrapper[27835]: I0318 13:43:41.851474 27835 generic.go:334] "Generic (PLEG): container finished" podID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerID="2799627999b460dd6649c4fc2334f6a6106daa0f75be855b7ac710c2f284f32d" exitCode=0 Mar 18 13:43:41.852109 master-0 kubenswrapper[27835]: I0318 13:43:41.851526 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" event={"ID":"8e32da24-78ff-44bb-9993-ac3c2048f236","Type":"ContainerDied","Data":"2799627999b460dd6649c4fc2334f6a6106daa0f75be855b7ac710c2f284f32d"} Mar 18 13:43:41.852109 master-0 kubenswrapper[27835]: I0318 13:43:41.851567 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" event={"ID":"8e32da24-78ff-44bb-9993-ac3c2048f236","Type":"ContainerStarted","Data":"b34330c2a827c5dc29198d23070c9b96f6921b2ef6cee67f44e67beb976856e0"} Mar 18 13:43:41.852109 master-0 kubenswrapper[27835]: I0318 13:43:41.851754 27835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:41.863898 master-0 kubenswrapper[27835]: I0318 13:43:41.855645 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:41.968363 master-0 kubenswrapper[27835]: I0318 13:43:41.968195 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d475bdf48-lskhl" podStartSLOduration=1.96798244 podStartE2EDuration="1.96798244s" podCreationTimestamp="2026-03-18 13:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:41.924347253 +0000 UTC m=+1185.889558833" watchObservedRunningTime="2026-03-18 13:43:41.96798244 +0000 UTC m=+1185.933194000" Mar 18 13:43:42.872003 master-0 kubenswrapper[27835]: I0318 13:43:42.871944 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" event={"ID":"8e32da24-78ff-44bb-9993-ac3c2048f236","Type":"ContainerStarted","Data":"84abc63d212182c1da56f66edd6aef3931262a198bdb532cda490f53e39eb924"} Mar 18 13:43:42.873062 master-0 kubenswrapper[27835]: I0318 13:43:42.872974 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:43.120809 master-0 kubenswrapper[27835]: I0318 13:43:43.120708 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" podStartSLOduration=3.120682382 podStartE2EDuration="3.120682382s" podCreationTimestamp="2026-03-18 13:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:43.10855442 +0000 UTC m=+1187.073766000" watchObservedRunningTime="2026-03-18 13:43:43.120682382 +0000 UTC 
m=+1187.085893942" Mar 18 13:43:43.908755 master-0 kubenswrapper[27835]: I0318 13:43:43.908700 27835 generic.go:334] "Generic (PLEG): container finished" podID="b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" containerID="e2636cb1cd9101b9d35f358ca47f86d0948b46fce6ed128b3743c3c241466123" exitCode=0 Mar 18 13:43:43.909361 master-0 kubenswrapper[27835]: I0318 13:43:43.909345 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:43:43.909600 master-0 kubenswrapper[27835]: I0318 13:43:43.908890 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-db-sync-jhwx7" event={"ID":"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321","Type":"ContainerDied","Data":"e2636cb1cd9101b9d35f358ca47f86d0948b46fce6ed128b3743c3c241466123"} Mar 18 13:43:43.910619 master-0 kubenswrapper[27835]: I0318 13:43:43.910601 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:44.422854 master-0 kubenswrapper[27835]: I0318 13:43:44.422791 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:44.424176 master-0 kubenswrapper[27835]: I0318 13:43:44.424123 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:43:44.690242 master-0 kubenswrapper[27835]: I0318 13:43:44.686903 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6bf5c56f77-ccf49"] Mar 18 13:43:44.690242 master-0 kubenswrapper[27835]: I0318 13:43:44.689524 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.692665 master-0 kubenswrapper[27835]: I0318 13:43:44.692623 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 18 13:43:44.693026 master-0 kubenswrapper[27835]: I0318 13:43:44.692943 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 18 13:43:44.838693 master-0 kubenswrapper[27835]: I0318 13:43:44.838614 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf5c56f77-ccf49"] Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840541 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-ovndb-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840658 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-public-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840687 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-internal-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840747 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8ds\" (UniqueName: \"kubernetes.io/projected/1253c32f-2d3e-455b-881b-e4d04e4f746c-kube-api-access-fw8ds\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840768 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-combined-ca-bundle\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840849 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-httpd-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.842296 master-0 kubenswrapper[27835]: I0318 13:43:44.840894 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943605 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-public-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 
13:43:44.943677 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-internal-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943771 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8ds\" (UniqueName: \"kubernetes.io/projected/1253c32f-2d3e-455b-881b-e4d04e4f746c-kube-api-access-fw8ds\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943797 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-combined-ca-bundle\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943864 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-httpd-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943925 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.945363 master-0 kubenswrapper[27835]: I0318 13:43:44.943963 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-ovndb-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.948869 master-0 kubenswrapper[27835]: I0318 13:43:44.948817 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-public-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.951557 master-0 kubenswrapper[27835]: I0318 13:43:44.951521 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-combined-ca-bundle\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.951656 master-0 kubenswrapper[27835]: I0318 13:43:44.951635 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-httpd-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.953103 master-0 kubenswrapper[27835]: I0318 13:43:44.953068 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-ovndb-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.963297 master-0 kubenswrapper[27835]: I0318 13:43:44.963220 27835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-config\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:44.973541 master-0 kubenswrapper[27835]: I0318 13:43:44.973500 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1253c32f-2d3e-455b-881b-e4d04e4f746c-internal-tls-certs\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:45.326457 master-0 kubenswrapper[27835]: I0318 13:43:45.323892 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8ds\" (UniqueName: \"kubernetes.io/projected/1253c32f-2d3e-455b-881b-e4d04e4f746c-kube-api-access-fw8ds\") pod \"neutron-6bf5c56f77-ccf49\" (UID: \"1253c32f-2d3e-455b-881b-e4d04e4f746c\") " pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:45.418700 master-0 kubenswrapper[27835]: I0318 13:43:45.418635 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:45.457086 master-0 kubenswrapper[27835]: I0318 13:43:45.456975 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.457086 master-0 kubenswrapper[27835]: I0318 13:43:45.457078 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.457350 master-0 kubenswrapper[27835]: I0318 13:43:45.457126 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.457350 master-0 kubenswrapper[27835]: I0318 13:43:45.457238 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.457608 master-0 kubenswrapper[27835]: I0318 13:43:45.457570 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.457701 master-0 kubenswrapper[27835]: I0318 13:43:45.457670 27835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkzj7\" (UniqueName: \"kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7\") pod \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\" (UID: \"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321\") " Mar 18 13:43:45.463163 master-0 kubenswrapper[27835]: I0318 13:43:45.463097 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:43:45.467762 master-0 kubenswrapper[27835]: I0318 13:43:45.467719 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:45.468441 master-0 kubenswrapper[27835]: I0318 13:43:45.468376 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7" (OuterVolumeSpecName: "kube-api-access-wkzj7") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "kube-api-access-wkzj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:45.469767 master-0 kubenswrapper[27835]: I0318 13:43:45.469694 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts" (OuterVolumeSpecName: "scripts") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:45.507236 master-0 kubenswrapper[27835]: I0318 13:43:45.507086 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:45.527905 master-0 kubenswrapper[27835]: I0318 13:43:45.527825 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data" (OuterVolumeSpecName: "config-data") pod "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" (UID: "b14c1c8f-6882-4b86-bfdf-ef1cae5e8321"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.559909 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.559971 27835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.559983 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.559997 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.560006 27835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.559999 master-0 kubenswrapper[27835]: I0318 13:43:45.560017 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkzj7\" (UniqueName: \"kubernetes.io/projected/b14c1c8f-6882-4b86-bfdf-ef1cae5e8321-kube-api-access-wkzj7\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:45.612885 master-0 kubenswrapper[27835]: I0318 13:43:45.607751 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:45.948807 master-0 kubenswrapper[27835]: I0318 13:43:45.948755 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-db-sync-jhwx7" event={"ID":"b14c1c8f-6882-4b86-bfdf-ef1cae5e8321","Type":"ContainerDied","Data":"694e546aa9f15054614f728b930ed73eb1079c6944eb0dcf47cfb389703c8e41"} Mar 18 13:43:45.948807 master-0 kubenswrapper[27835]: I0318 13:43:45.948807 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694e546aa9f15054614f728b930ed73eb1079c6944eb0dcf47cfb389703c8e41" Mar 18 13:43:45.949314 master-0 kubenswrapper[27835]: I0318 13:43:45.948767 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-db-sync-jhwx7" Mar 18 13:43:46.223557 master-0 kubenswrapper[27835]: W0318 13:43:46.223503 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1253c32f_2d3e_455b_881b_e4d04e4f746c.slice/crio-988eed6e204797552be6f652b30601929ff2f32ecf82b0ac9c94414aa4a3dae2 WatchSource:0}: Error finding container 988eed6e204797552be6f652b30601929ff2f32ecf82b0ac9c94414aa4a3dae2: Status 404 returned error can't find the container with id 988eed6e204797552be6f652b30601929ff2f32ecf82b0ac9c94414aa4a3dae2 Mar 18 13:43:46.239936 master-0 kubenswrapper[27835]: I0318 13:43:46.239876 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6bf5c56f77-ccf49"] Mar 18 13:43:46.454747 master-0 kubenswrapper[27835]: I0318 13:43:46.454584 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:43:46.455539 master-0 kubenswrapper[27835]: E0318 13:43:46.455377 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" containerName="cinder-07518-db-sync" Mar 18 13:43:46.455539 master-0 
kubenswrapper[27835]: I0318 13:43:46.455403 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" containerName="cinder-07518-db-sync" Mar 18 13:43:46.457569 master-0 kubenswrapper[27835]: I0318 13:43:46.455767 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14c1c8f-6882-4b86-bfdf-ef1cae5e8321" containerName="cinder-07518-db-sync" Mar 18 13:43:46.458103 master-0 kubenswrapper[27835]: I0318 13:43:46.458045 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.465498 master-0 kubenswrapper[27835]: I0318 13:43:46.461917 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-scripts" Mar 18 13:43:46.465498 master-0 kubenswrapper[27835]: I0318 13:43:46.462105 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-config-data" Mar 18 13:43:46.465498 master-0 kubenswrapper[27835]: I0318 13:43:46.464553 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-scheduler-config-data" Mar 18 13:43:46.514739 master-0 kubenswrapper[27835]: I0318 13:43:46.510243 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"] Mar 18 13:43:46.539469 master-0 kubenswrapper[27835]: I0318 13:43:46.519671 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.539469 master-0 kubenswrapper[27835]: I0318 13:43:46.528491 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-volume-lvm-iscsi-config-data" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.546687 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.546774 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.546934 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.547020 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.547047 
27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.547114 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26glv\" (UniqueName: \"kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:46.552396 master-0 kubenswrapper[27835]: I0318 13:43:46.547679 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:43:46.562910 master-0 kubenswrapper[27835]: I0318 13:43:46.562669 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"] Mar 18 13:43:46.597283 master-0 kubenswrapper[27835]: I0318 13:43:46.591941 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:46.597283 master-0 kubenswrapper[27835]: I0318 13:43:46.592213 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="dnsmasq-dns" containerID="cri-o://84abc63d212182c1da56f66edd6aef3931262a198bdb532cda490f53e39eb924" gracePeriod=10 Mar 18 13:43:46.640643 master-0 kubenswrapper[27835]: I0318 13:43:46.639387 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"] Mar 18 13:43:46.653811 master-0 kubenswrapper[27835]: I0318 13:43:46.641268 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.656517 master-0 kubenswrapper[27835]: I0318 13:43:46.656483 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-backup-0"] Mar 18 13:43:46.657983 master-0 kubenswrapper[27835]: I0318 13:43:46.657911 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.658077 master-0 kubenswrapper[27835]: I0318 13:43:46.657986 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.658077 master-0 kubenswrapper[27835]: I0318 13:43:46.658014 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.658758 master-0 kubenswrapper[27835]: I0318 13:43:46.658724 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.658833 master-0 kubenswrapper[27835]: I0318 13:43:46.658761 27835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.658833 master-0 kubenswrapper[27835]: I0318 13:43:46.658783 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.658833 master-0 kubenswrapper[27835]: I0318 13:43:46.658808 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.658982 master-0 kubenswrapper[27835]: I0318 13:43:46.658850 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.658982 master-0 kubenswrapper[27835]: I0318 13:43:46.658870 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.658982 master-0 kubenswrapper[27835]: I0318 13:43:46.658891 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.658982 master-0 kubenswrapper[27835]: I0318 13:43:46.658909 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.658982 master-0 kubenswrapper[27835]: I0318 13:43:46.658928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.660022 master-0 kubenswrapper[27835]: I0318 13:43:46.659990 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.660609 master-0 kubenswrapper[27835]: I0318 13:43:46.660586 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.662525 master-0 kubenswrapper[27835]: I0318 13:43:46.662466 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-backup-config-data"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.658945 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669186 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snwz\" (UniqueName: \"kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669238 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669274 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26glv\" (UniqueName: \"kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669320 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669376 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669400 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.669874 master-0 kubenswrapper[27835]: I0318 13:43:46.669434 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.671648 master-0 kubenswrapper[27835]: I0318 13:43:46.671612 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.676184 master-0 kubenswrapper[27835]: I0318 13:43:46.676151 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.676512 master-0 kubenswrapper[27835]: I0318 13:43:46.676404 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.685557 master-0 kubenswrapper[27835]: I0318 13:43:46.684076 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.687501 master-0 kubenswrapper[27835]: I0318 13:43:46.687377 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"]
Mar 18 13:43:46.687959 master-0 kubenswrapper[27835]: I0318 13:43:46.687925 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.723641 master-0 kubenswrapper[27835]: I0318 13:43:46.722677 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:43:46.725038 master-0 kubenswrapper[27835]: I0318 13:43:46.724709 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26glv\" (UniqueName: \"kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv\") pod \"cinder-07518-scheduler-0\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.779382 master-0 kubenswrapper[27835]: I0318 13:43:46.779262 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779389 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779447 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779476 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779497 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779519 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779536 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.779591 master-0 kubenswrapper[27835]: I0318 13:43:46.779559 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxv9\" (UniqueName: \"kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.779810 master-0 kubenswrapper[27835]: I0318 13:43:46.779596 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.780062 master-0 kubenswrapper[27835]: I0318 13:43:46.780031 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.780184 master-0 kubenswrapper[27835]: I0318 13:43:46.780122 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780580 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780619 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780639 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780662 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780682 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780703 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780732 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snwz\" (UniqueName: \"kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780754 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxph\" (UniqueName: \"kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780779 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780841 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780873 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780918 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780943 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780959 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.780978 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781116 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781217 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781238 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781274 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781302 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781320 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781381 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781404 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781435 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781452 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.782329 master-0 kubenswrapper[27835]: I0318 13:43:46.781582 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.786236 master-0 kubenswrapper[27835]: I0318 13:43:46.786207 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.790292 master-0 kubenswrapper[27835]: I0318 13:43:46.790266 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.790394 master-0 kubenswrapper[27835]: I0318 13:43:46.790349 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.790394 master-0 kubenswrapper[27835]: I0318 13:43:46.790378 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.790638 master-0 kubenswrapper[27835]: I0318 13:43:46.790609 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.790689 master-0 kubenswrapper[27835]: I0318 13:43:46.790649 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.791066 master-0 kubenswrapper[27835]: I0318 13:43:46.791048 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.791192 master-0 kubenswrapper[27835]: I0318 13:43:46.791174 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.791246 master-0 kubenswrapper[27835]: I0318 13:43:46.791214 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.791246 master-0 kubenswrapper[27835]: I0318 13:43:46.791242 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.792284 master-0 kubenswrapper[27835]: I0318 13:43:46.791854 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.798466 master-0 kubenswrapper[27835]: I0318 13:43:46.798401 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-api-0"]
Mar 18 13:43:46.800301 master-0 kubenswrapper[27835]: I0318 13:43:46.800264 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-api-0"
Mar 18 13:43:46.805729 master-0 kubenswrapper[27835]: I0318 13:43:46.805443 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.807046 master-0 kubenswrapper[27835]: I0318 13:43:46.806947 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.817494 master-0 kubenswrapper[27835]: I0318 13:43:46.813186 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-api-config-data"
Mar 18 13:43:46.821515 master-0 kubenswrapper[27835]: I0318 13:43:46.821068 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snwz\" (UniqueName: \"kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:46.821515 master-0 kubenswrapper[27835]: I0318 13:43:46.821130 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-api-0"]
Mar 18 13:43:46.883335 master-0 kubenswrapper[27835]: I0318 13:43:46.883280 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.887280 master-0 kubenswrapper[27835]: I0318 13:43:46.887251 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0"
Mar 18 13:43:46.887740 master-0 kubenswrapper[27835]: I0318 13:43:46.885573 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:46.888484 master-0 kubenswrapper[27835]: I0318 13:43:46.888463 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.888621 master-0 kubenswrapper[27835]: I0318 13:43:46.888580 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.888735 master-0 kubenswrapper[27835]: I0318 13:43:46.888716 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxph\" (UniqueName: \"kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.888822 master-0 kubenswrapper[27835]: I0318 13:43:46.888809 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.888895 master-0 kubenswrapper[27835]: I0318 13:43:46.888883 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0"
Mar 18 13:43:46.888983 master-0 kubenswrapper[27835]: I0318 13:43:46.888966 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889101 master-0 kubenswrapper[27835]: I0318 13:43:46.889086 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889215 master-0 kubenswrapper[27835]: I0318 13:43:46.889195 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889343 master-0 kubenswrapper[27835]: I0318 13:43:46.889325 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.889623 master-0 kubenswrapper[27835]: I0318 13:43:46.889606 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.889714 master-0 kubenswrapper[27835]: I0318 13:43:46.889700 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0"
Mar 18 13:43:46.889791 master-0 kubenswrapper[27835]: I0318 13:43:46.889760 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889838 master-0 kubenswrapper[27835]: I0318 13:43:46.889824 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889884 master-0 kubenswrapper[27835]: I0318 13:43:46.889854 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:46.889984 master-0 kubenswrapper[27835]: I0318 13:43:46.889964 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:46.890101 master-0
kubenswrapper[27835]: I0318 13:43:46.890085 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.890183 master-0 kubenswrapper[27835]: I0318 13:43:46.885073 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.890818 master-0 kubenswrapper[27835]: I0318 13:43:46.890791 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.890881 master-0 kubenswrapper[27835]: I0318 13:43:46.890159 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.890881 master-0 kubenswrapper[27835]: I0318 13:43:46.890875 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.890952 master-0 kubenswrapper[27835]: I0318 13:43:46.890892 27835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.891321 master-0 kubenswrapper[27835]: I0318 13:43:46.891304 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.891644 master-0 kubenswrapper[27835]: I0318 13:43:46.891626 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.891732 master-0 kubenswrapper[27835]: I0318 13:43:46.891717 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.891814 master-0 kubenswrapper[27835]: I0318 13:43:46.891798 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqdw\" (UniqueName: \"kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.891920 master-0 kubenswrapper[27835]: I0318 13:43:46.891906 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.892020 master-0 kubenswrapper[27835]: I0318 13:43:46.892008 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892113 master-0 kubenswrapper[27835]: I0318 13:43:46.892100 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892213 master-0 kubenswrapper[27835]: I0318 13:43:46.892200 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.892287 master-0 kubenswrapper[27835]: I0318 13:43:46.892275 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.892371 master-0 kubenswrapper[27835]: I0318 13:43:46.892351 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892466 master-0 kubenswrapper[27835]: I0318 13:43:46.892452 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxv9\" (UniqueName: \"kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892574 master-0 kubenswrapper[27835]: I0318 13:43:46.892560 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892682 master-0 kubenswrapper[27835]: I0318 13:43:46.892655 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.892778 master-0 kubenswrapper[27835]: I0318 13:43:46.892765 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.892954 master-0 kubenswrapper[27835]: I0318 13:43:46.892932 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.893265 master-0 kubenswrapper[27835]: I0318 13:43:46.893234 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.894433 master-0 kubenswrapper[27835]: I0318 13:43:46.894371 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.894492 master-0 kubenswrapper[27835]: I0318 13:43:46.894478 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.894820 master-0 kubenswrapper[27835]: I0318 13:43:46.894799 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.895000 master-0 kubenswrapper[27835]: I0318 13:43:46.894930 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys\") pod 
\"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.897247 master-0 kubenswrapper[27835]: I0318 13:43:46.897221 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.899469 master-0 kubenswrapper[27835]: I0318 13:43:46.897963 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.900008 master-0 kubenswrapper[27835]: I0318 13:43:46.899949 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.904840 master-0 kubenswrapper[27835]: I0318 13:43:46.904717 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.919618 master-0 kubenswrapper[27835]: I0318 13:43:46.914792 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxph\" (UniqueName: \"kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph\") pod \"dnsmasq-dns-6bd84765b9-ps9s9\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") " 
pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:46.920555 master-0 kubenswrapper[27835]: I0318 13:43:46.920503 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:46.938883 master-0 kubenswrapper[27835]: I0318 13:43:46.937204 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxv9\" (UniqueName: \"kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9\") pod \"cinder-07518-backup-0\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") " pod="openstack/cinder-07518-backup-0" Mar 18 13:43:46.995885 master-0 kubenswrapper[27835]: I0318 13:43:46.995826 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.995896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.995942 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.996015 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.996076 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqdw\" (UniqueName: \"kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.996118 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996488 master-0 kubenswrapper[27835]: I0318 13:43:46.996184 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996743 master-0 kubenswrapper[27835]: I0318 13:43:46.996649 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:46.996993 master-0 kubenswrapper[27835]: I0318 13:43:46.996961 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs\") pod \"cinder-07518-api-0\" (UID: 
\"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.044875 master-0 kubenswrapper[27835]: I0318 13:43:47.022049 27835 generic.go:334] "Generic (PLEG): container finished" podID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerID="84abc63d212182c1da56f66edd6aef3931262a198bdb532cda490f53e39eb924" exitCode=0 Mar 18 13:43:47.044875 master-0 kubenswrapper[27835]: I0318 13:43:47.022163 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" event={"ID":"8e32da24-78ff-44bb-9993-ac3c2048f236","Type":"ContainerDied","Data":"84abc63d212182c1da56f66edd6aef3931262a198bdb532cda490f53e39eb924"} Mar 18 13:43:47.048457 master-0 kubenswrapper[27835]: I0318 13:43:47.047437 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5c56f77-ccf49" event={"ID":"1253c32f-2d3e-455b-881b-e4d04e4f746c","Type":"ContainerStarted","Data":"a2579ebf2f541114b3d600ea00fcfde63499261d774c319ef6cb8a70ba0f21fa"} Mar 18 13:43:47.048672 master-0 kubenswrapper[27835]: I0318 13:43:47.048654 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5c56f77-ccf49" event={"ID":"1253c32f-2d3e-455b-881b-e4d04e4f746c","Type":"ContainerStarted","Data":"988eed6e204797552be6f652b30601929ff2f32ecf82b0ac9c94414aa4a3dae2"} Mar 18 13:43:47.054957 master-0 kubenswrapper[27835]: I0318 13:43:47.054920 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.061364 master-0 kubenswrapper[27835]: I0318 13:43:47.061319 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom\") pod \"cinder-07518-api-0\" (UID: 
\"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.069568 master-0 kubenswrapper[27835]: I0318 13:43:47.068064 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqdw\" (UniqueName: \"kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.086829 master-0 kubenswrapper[27835]: I0318 13:43:47.086787 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.087036 master-0 kubenswrapper[27835]: I0318 13:43:47.086914 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.098596 master-0 kubenswrapper[27835]: I0318 13:43:47.098485 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:47.149302 master-0 kubenswrapper[27835]: I0318 13:43:47.149252 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0" Mar 18 13:43:47.216929 master-0 kubenswrapper[27835]: I0318 13:43:47.216769 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-api-0" Mar 18 13:43:47.268700 master-0 kubenswrapper[27835]: I0318 13:43:47.268427 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:47.304574 master-0 kubenswrapper[27835]: I0318 13:43:47.304498 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.304674 master-0 kubenswrapper[27835]: I0318 13:43:47.304605 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.304776 master-0 kubenswrapper[27835]: I0318 13:43:47.304742 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj67l\" (UniqueName: \"kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.304842 master-0 kubenswrapper[27835]: I0318 13:43:47.304805 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.304961 master-0 kubenswrapper[27835]: I0318 13:43:47.304937 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.305021 master-0 kubenswrapper[27835]: I0318 13:43:47.304979 
27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config\") pod \"8e32da24-78ff-44bb-9993-ac3c2048f236\" (UID: \"8e32da24-78ff-44bb-9993-ac3c2048f236\") " Mar 18 13:43:47.330600 master-0 kubenswrapper[27835]: I0318 13:43:47.330189 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l" (OuterVolumeSpecName: "kube-api-access-fj67l") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "kube-api-access-fj67l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:47.407916 master-0 kubenswrapper[27835]: I0318 13:43:47.407833 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj67l\" (UniqueName: \"kubernetes.io/projected/8e32da24-78ff-44bb-9993-ac3c2048f236-kube-api-access-fj67l\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.628651 master-0 kubenswrapper[27835]: I0318 13:43:47.628448 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:47.646983 master-0 kubenswrapper[27835]: I0318 13:43:47.646921 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config" (OuterVolumeSpecName: "config") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:47.647217 master-0 kubenswrapper[27835]: I0318 13:43:47.647069 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:47.650071 master-0 kubenswrapper[27835]: I0318 13:43:47.649726 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:47.651802 master-0 kubenswrapper[27835]: I0318 13:43:47.651686 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.651802 master-0 kubenswrapper[27835]: I0318 13:43:47.651714 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.651802 master-0 kubenswrapper[27835]: I0318 13:43:47.651724 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.651802 master-0 kubenswrapper[27835]: I0318 13:43:47.651733 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.704794 master-0 kubenswrapper[27835]: I0318 13:43:47.704634 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:43:47.743858 master-0 kubenswrapper[27835]: I0318 13:43:47.743777 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8e32da24-78ff-44bb-9993-ac3c2048f236" (UID: "8e32da24-78ff-44bb-9993-ac3c2048f236"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:43:47.761055 master-0 kubenswrapper[27835]: I0318 13:43:47.759205 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8e32da24-78ff-44bb-9993-ac3c2048f236-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:47.832133 master-0 kubenswrapper[27835]: I0318 13:43:47.832009 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"] Mar 18 13:43:47.879812 master-0 kubenswrapper[27835]: I0318 13:43:47.879755 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"] Mar 18 13:43:47.905537 master-0 kubenswrapper[27835]: W0318 13:43:47.905422 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda56300b8_65fa_4a52_8226_4a9e1cec54f7.slice/crio-eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3 WatchSource:0}: Error finding container eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3: Status 404 returned error can't find the container with id eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3 Mar 18 13:43:48.066534 master-0 kubenswrapper[27835]: 
I0318 13:43:48.059110 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerStarted","Data":"eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3"} Mar 18 13:43:48.066534 master-0 kubenswrapper[27835]: I0318 13:43:48.060201 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" event={"ID":"37f5936e-eb3f-4cad-8762-fdcd4f042c63","Type":"ContainerStarted","Data":"4be5d73359dfc29a9b2ed7999060e69529ff0ec42c134b74397f8a2e66d6771b"} Mar 18 13:43:48.066534 master-0 kubenswrapper[27835]: I0318 13:43:48.061452 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" event={"ID":"8e32da24-78ff-44bb-9993-ac3c2048f236","Type":"ContainerDied","Data":"b34330c2a827c5dc29198d23070c9b96f6921b2ef6cee67f44e67beb976856e0"} Mar 18 13:43:48.066534 master-0 kubenswrapper[27835]: I0318 13:43:48.061481 27835 scope.go:117] "RemoveContainer" containerID="84abc63d212182c1da56f66edd6aef3931262a198bdb532cda490f53e39eb924" Mar 18 13:43:48.066534 master-0 kubenswrapper[27835]: I0318 13:43:48.061585 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c8b6b8bf-bpq6d" Mar 18 13:43:48.070001 master-0 kubenswrapper[27835]: I0318 13:43:48.069919 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6bf5c56f77-ccf49" event={"ID":"1253c32f-2d3e-455b-881b-e4d04e4f746c","Type":"ContainerStarted","Data":"edcd34f1d1782d4c21afcdddacc3f42ef7014ed11cd0f0ffabb0cf8d6c5814ec"} Mar 18 13:43:48.070250 master-0 kubenswrapper[27835]: I0318 13:43:48.070092 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6bf5c56f77-ccf49" Mar 18 13:43:48.073197 master-0 kubenswrapper[27835]: I0318 13:43:48.073161 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerStarted","Data":"eb6aff1dc5397636ddce8851f049258f949663bc57d543f3267eaabb21983ef7"} Mar 18 13:43:48.142620 master-0 kubenswrapper[27835]: I0318 13:43:48.135526 27835 scope.go:117] "RemoveContainer" containerID="2799627999b460dd6649c4fc2334f6a6106daa0f75be855b7ac710c2f284f32d" Mar 18 13:43:48.168346 master-0 kubenswrapper[27835]: I0318 13:43:48.155975 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:48.182721 master-0 kubenswrapper[27835]: I0318 13:43:48.182640 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6bf5c56f77-ccf49" podStartSLOduration=4.18261791 podStartE2EDuration="4.18261791s" podCreationTimestamp="2026-03-18 13:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:48.169079501 +0000 UTC m=+1192.134291081" watchObservedRunningTime="2026-03-18 13:43:48.18261791 +0000 UTC m=+1192.147829480" Mar 18 13:43:48.245742 master-0 kubenswrapper[27835]: I0318 13:43:48.245664 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:48.260211 master-0 kubenswrapper[27835]: I0318 13:43:48.260144 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c8b6b8bf-bpq6d"] Mar 18 13:43:48.324437 master-0 kubenswrapper[27835]: I0318 13:43:48.317549 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" path="/var/lib/kubelet/pods/8e32da24-78ff-44bb-9993-ac3c2048f236/volumes" Mar 18 13:43:48.336882 master-0 kubenswrapper[27835]: W0318 13:43:48.336677 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4adff55_e451_46fd_8e42_3aae24aa8baf.slice/crio-0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa WatchSource:0}: Error finding container 0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa: Status 404 returned error can't find the container with id 0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa Mar 18 13:43:48.343801 master-0 kubenswrapper[27835]: I0318 13:43:48.343623 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-backup-0"] Mar 18 13:43:49.106465 master-0 kubenswrapper[27835]: I0318 13:43:49.106231 27835 generic.go:334] "Generic (PLEG): container finished" podID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerID="e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985" exitCode=0 Mar 18 13:43:49.107157 master-0 kubenswrapper[27835]: I0318 13:43:49.106481 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" event={"ID":"37f5936e-eb3f-4cad-8762-fdcd4f042c63","Type":"ContainerDied","Data":"e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985"} Mar 18 13:43:49.114190 master-0 kubenswrapper[27835]: I0318 13:43:49.114143 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" 
event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerStarted","Data":"0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa"} Mar 18 13:43:49.124070 master-0 kubenswrapper[27835]: I0318 13:43:49.124017 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerStarted","Data":"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005"} Mar 18 13:43:49.124070 master-0 kubenswrapper[27835]: I0318 13:43:49.124073 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerStarted","Data":"49b2f6919732a094786c05ec4fa3e72ac8477d03ae1d2cd7c3090a61ab55a32a"} Mar 18 13:43:49.224537 master-0 kubenswrapper[27835]: I0318 13:43:49.224465 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:50.161963 master-0 kubenswrapper[27835]: I0318 13:43:50.161904 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerStarted","Data":"b46f7d8ac358127fb734caef37480cdda19800bfb6beaf303f0db89ef7e4fb5e"} Mar 18 13:43:50.166952 master-0 kubenswrapper[27835]: I0318 13:43:50.166904 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerStarted","Data":"c351e66ead3b863e0bf41427b1781b2f159cb4bc7fb04945ff8e1b265cc8ad3a"} Mar 18 13:43:50.167110 master-0 kubenswrapper[27835]: I0318 13:43:50.167092 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerStarted","Data":"7566039db55a3fa7a1c1e767db952b44b7bce0f0fe743c1d4c48cd7017b26a49"} Mar 18 13:43:50.183343 master-0 
kubenswrapper[27835]: I0318 13:43:50.182768 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" event={"ID":"37f5936e-eb3f-4cad-8762-fdcd4f042c63","Type":"ContainerStarted","Data":"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"} Mar 18 13:43:50.183343 master-0 kubenswrapper[27835]: I0318 13:43:50.182975 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:50.190474 master-0 kubenswrapper[27835]: I0318 13:43:50.189799 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerStarted","Data":"43f8b3bdb46056778948638357ff249d94df1b2779c75ff764cc1ed3f2f99d3a"} Mar 18 13:43:50.190474 master-0 kubenswrapper[27835]: I0318 13:43:50.189867 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerStarted","Data":"f89bf4204b535ab750e695241c60df308ecd5f2ab4f3683dc0bb785add0f33b4"} Mar 18 13:43:50.217782 master-0 kubenswrapper[27835]: I0318 13:43:50.213564 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-volume-lvm-iscsi-0" podStartSLOduration=3.172435382 podStartE2EDuration="4.213509785s" podCreationTimestamp="2026-03-18 13:43:46 +0000 UTC" firstStartedPulling="2026-03-18 13:43:47.907812463 +0000 UTC m=+1191.873024023" lastFinishedPulling="2026-03-18 13:43:48.948886866 +0000 UTC m=+1192.914098426" observedRunningTime="2026-03-18 13:43:50.198443465 +0000 UTC m=+1194.163655025" watchObservedRunningTime="2026-03-18 13:43:50.213509785 +0000 UTC m=+1194.178721355" Mar 18 13:43:50.243326 master-0 kubenswrapper[27835]: I0318 13:43:50.242918 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-backup-0" podStartSLOduration=3.362384089 
podStartE2EDuration="4.242898614s" podCreationTimestamp="2026-03-18 13:43:46 +0000 UTC" firstStartedPulling="2026-03-18 13:43:48.349078663 +0000 UTC m=+1192.314290223" lastFinishedPulling="2026-03-18 13:43:49.229593188 +0000 UTC m=+1193.194804748" observedRunningTime="2026-03-18 13:43:50.233726241 +0000 UTC m=+1194.198937801" watchObservedRunningTime="2026-03-18 13:43:50.242898614 +0000 UTC m=+1194.208110174" Mar 18 13:43:50.268446 master-0 kubenswrapper[27835]: I0318 13:43:50.267935 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" podStartSLOduration=4.267909867 podStartE2EDuration="4.267909867s" podCreationTimestamp="2026-03-18 13:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:50.261938079 +0000 UTC m=+1194.227149659" watchObservedRunningTime="2026-03-18 13:43:50.267909867 +0000 UTC m=+1194.233121427" Mar 18 13:43:51.214946 master-0 kubenswrapper[27835]: I0318 13:43:51.214859 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerStarted","Data":"549a7ae3111471c58ef4084a11ca7ad6014c3967b2035c1cdd4e7a7be1c9132e"} Mar 18 13:43:51.235476 master-0 kubenswrapper[27835]: I0318 13:43:51.235378 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerStarted","Data":"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c"} Mar 18 13:43:51.235718 master-0 kubenswrapper[27835]: I0318 13:43:51.235634 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-api-0" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-07518-api-log" containerID="cri-o://2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" 
gracePeriod=30 Mar 18 13:43:51.235774 master-0 kubenswrapper[27835]: I0318 13:43:51.235763 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-07518-api-0" Mar 18 13:43:51.235839 master-0 kubenswrapper[27835]: I0318 13:43:51.235804 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-api-0" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-api" containerID="cri-o://09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" gracePeriod=30 Mar 18 13:43:51.285051 master-0 kubenswrapper[27835]: I0318 13:43:51.284896 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-scheduler-0" podStartSLOduration=4.44451391 podStartE2EDuration="5.28487268s" podCreationTimestamp="2026-03-18 13:43:46 +0000 UTC" firstStartedPulling="2026-03-18 13:43:47.730822991 +0000 UTC m=+1191.696034561" lastFinishedPulling="2026-03-18 13:43:48.571181771 +0000 UTC m=+1192.536393331" observedRunningTime="2026-03-18 13:43:51.261992324 +0000 UTC m=+1195.227203904" watchObservedRunningTime="2026-03-18 13:43:51.28487268 +0000 UTC m=+1195.250084240" Mar 18 13:43:51.319217 master-0 kubenswrapper[27835]: I0318 13:43:51.319129 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-api-0" podStartSLOduration=5.319109048 podStartE2EDuration="5.319109048s" podCreationTimestamp="2026-03-18 13:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:51.305711362 +0000 UTC m=+1195.270922932" watchObservedRunningTime="2026-03-18 13:43:51.319109048 +0000 UTC m=+1195.284320608" Mar 18 13:43:51.897579 master-0 kubenswrapper[27835]: I0318 13:43:51.887211 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-scheduler-0" Mar 18 13:43:51.922072 master-0 
kubenswrapper[27835]: I0318 13:43:51.922020 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:43:52.150331 master-0 kubenswrapper[27835]: I0318 13:43:52.150266 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-backup-0" Mar 18 13:43:52.184666 master-0 kubenswrapper[27835]: I0318 13:43:52.184626 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.284443 master-0 kubenswrapper[27835]: I0318 13:43:52.283793 27835 generic.go:334] "Generic (PLEG): container finished" podID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerID="f8f4c7cbe32c6357cee6ef56f3a6f6472666bb8fc1029ecdf447f80277df6df6" exitCode=0 Mar 18 13:43:52.295482 master-0 kubenswrapper[27835]: I0318 13:43:52.288848 27835 generic.go:334] "Generic (PLEG): container finished" podID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerID="09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" exitCode=0 Mar 18 13:43:52.295482 master-0 kubenswrapper[27835]: I0318 13:43:52.288887 27835 generic.go:334] "Generic (PLEG): container finished" podID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerID="2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" exitCode=143 Mar 18 13:43:52.295482 master-0 kubenswrapper[27835]: I0318 13:43:52.289816 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.302437 master-0 kubenswrapper[27835]: I0318 13:43:52.296261 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerDied","Data":"f8f4c7cbe32c6357cee6ef56f3a6f6472666bb8fc1029ecdf447f80277df6df6"} Mar 18 13:43:52.302437 master-0 kubenswrapper[27835]: I0318 13:43:52.296316 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerDied","Data":"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c"} Mar 18 13:43:52.302437 master-0 kubenswrapper[27835]: I0318 13:43:52.296333 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerDied","Data":"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005"} Mar 18 13:43:52.302437 master-0 kubenswrapper[27835]: I0318 13:43:52.296344 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17","Type":"ContainerDied","Data":"49b2f6919732a094786c05ec4fa3e72ac8477d03ae1d2cd7c3090a61ab55a32a"} Mar 18 13:43:52.302437 master-0 kubenswrapper[27835]: I0318 13:43:52.296362 27835 scope.go:117] "RemoveContainer" containerID="09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308254 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308361 27835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qxqdw\" (UniqueName: \"kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308430 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308458 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308486 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308644 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts\") pod \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.308681 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id\") pod 
\"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\" (UID: \"cd1e20cb-cebc-44b3-8f43-57ecd97ddf17\") " Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.310153 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:43:52.313444 master-0 kubenswrapper[27835]: I0318 13:43:52.310739 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs" (OuterVolumeSpecName: "logs") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:43:52.322450 master-0 kubenswrapper[27835]: I0318 13:43:52.318689 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:52.322450 master-0 kubenswrapper[27835]: I0318 13:43:52.321798 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw" (OuterVolumeSpecName: "kube-api-access-qxqdw") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "kube-api-access-qxqdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:52.337820 master-0 kubenswrapper[27835]: I0318 13:43:52.332170 27835 scope.go:117] "RemoveContainer" containerID="2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" Mar 18 13:43:52.341437 master-0 kubenswrapper[27835]: I0318 13:43:52.339747 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts" (OuterVolumeSpecName: "scripts") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:52.358457 master-0 kubenswrapper[27835]: I0318 13:43:52.358096 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:52.398438 master-0 kubenswrapper[27835]: I0318 13:43:52.397752 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data" (OuterVolumeSpecName: "config-data") pod "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" (UID: "cd1e20cb-cebc-44b3-8f43-57ecd97ddf17"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413189 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413242 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxqdw\" (UniqueName: \"kubernetes.io/projected/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-kube-api-access-qxqdw\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413258 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413268 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413279 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413291 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.416434 master-0 kubenswrapper[27835]: I0318 13:43:52.413301 27835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:52.500965 master-0 kubenswrapper[27835]: I0318 13:43:52.498975 27835 scope.go:117] "RemoveContainer" containerID="09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" Mar 18 13:43:52.500965 master-0 kubenswrapper[27835]: E0318 13:43:52.499835 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c\": container with ID starting with 09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c not found: ID does not exist" containerID="09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" Mar 18 13:43:52.500965 master-0 kubenswrapper[27835]: I0318 13:43:52.499892 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c"} err="failed to get container status \"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c\": rpc error: code = NotFound desc = could not find container \"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c\": container with ID starting with 09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c not found: ID does not exist" Mar 18 13:43:52.500965 master-0 kubenswrapper[27835]: I0318 13:43:52.499926 27835 scope.go:117] "RemoveContainer" containerID="2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" Mar 18 13:43:52.503948 master-0 kubenswrapper[27835]: E0318 13:43:52.503821 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005\": container with ID starting with 2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005 not found: ID does not exist" 
containerID="2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" Mar 18 13:43:52.503948 master-0 kubenswrapper[27835]: I0318 13:43:52.503867 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005"} err="failed to get container status \"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005\": rpc error: code = NotFound desc = could not find container \"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005\": container with ID starting with 2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005 not found: ID does not exist" Mar 18 13:43:52.503948 master-0 kubenswrapper[27835]: I0318 13:43:52.503896 27835 scope.go:117] "RemoveContainer" containerID="09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c" Mar 18 13:43:52.517438 master-0 kubenswrapper[27835]: I0318 13:43:52.514891 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c"} err="failed to get container status \"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c\": rpc error: code = NotFound desc = could not find container \"09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c\": container with ID starting with 09873b32c873f251b6e613812a76dbdc0212e396601cdd54947c12abc46fec8c not found: ID does not exist" Mar 18 13:43:52.517438 master-0 kubenswrapper[27835]: I0318 13:43:52.514957 27835 scope.go:117] "RemoveContainer" containerID="2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005" Mar 18 13:43:52.517438 master-0 kubenswrapper[27835]: I0318 13:43:52.516952 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005"} err="failed to get container status 
\"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005\": rpc error: code = NotFound desc = could not find container \"2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005\": container with ID starting with 2cfca46c55ad9af5fc41f53b25b0e5412137a600280cfe1080cb410a4584f005 not found: ID does not exist" Mar 18 13:43:52.670443 master-0 kubenswrapper[27835]: I0318 13:43:52.664072 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:52.681962 master-0 kubenswrapper[27835]: I0318 13:43:52.678676 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: I0318 13:43:52.698788 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: E0318 13:43:52.699349 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="dnsmasq-dns" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: I0318 13:43:52.699366 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="dnsmasq-dns" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: E0318 13:43:52.699382 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-api" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: I0318 13:43:52.699388 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-api" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: E0318 13:43:52.699453 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="init" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: I0318 13:43:52.699462 27835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="init" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: E0318 13:43:52.699489 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-07518-api-log" Mar 18 13:43:52.699660 master-0 kubenswrapper[27835]: I0318 13:43:52.699496 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-07518-api-log" Mar 18 13:43:52.700238 master-0 kubenswrapper[27835]: I0318 13:43:52.699719 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e32da24-78ff-44bb-9993-ac3c2048f236" containerName="dnsmasq-dns" Mar 18 13:43:52.700238 master-0 kubenswrapper[27835]: I0318 13:43:52.699731 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-07518-api-log" Mar 18 13:43:52.700238 master-0 kubenswrapper[27835]: I0318 13:43:52.699759 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" containerName="cinder-api" Mar 18 13:43:52.701665 master-0 kubenswrapper[27835]: I0318 13:43:52.700934 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.704810 master-0 kubenswrapper[27835]: I0318 13:43:52.704750 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-api-config-data" Mar 18 13:43:52.705054 master-0 kubenswrapper[27835]: I0318 13:43:52.705025 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 18 13:43:52.707715 master-0 kubenswrapper[27835]: I0318 13:43:52.707662 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 18 13:43:52.717456 master-0 kubenswrapper[27835]: I0318 13:43:52.717350 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:52.829081 master-0 kubenswrapper[27835]: I0318 13:43:52.829004 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-logs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829095 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-public-tls-certs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829141 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-internal-tls-certs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 
master-0 kubenswrapper[27835]: I0318 13:43:52.829172 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9fc4\" (UniqueName: \"kubernetes.io/projected/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-kube-api-access-t9fc4\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829191 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-scripts\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829216 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829250 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data-custom\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.829345 master-0 kubenswrapper[27835]: I0318 13:43:52.829275 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 
13:43:52.829754 master-0 kubenswrapper[27835]: I0318 13:43:52.829406 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.930974 master-0 kubenswrapper[27835]: I0318 13:43:52.930902 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9fc4\" (UniqueName: \"kubernetes.io/projected/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-kube-api-access-t9fc4\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.930974 master-0 kubenswrapper[27835]: I0318 13:43:52.930962 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-scripts\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.930994 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931032 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data-custom\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931059 
27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931127 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931166 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-logs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931211 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-public-tls-certs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.931342 master-0 kubenswrapper[27835]: I0318 13:43:52.931249 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-internal-tls-certs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.933509 master-0 kubenswrapper[27835]: I0318 13:43:52.933458 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-logs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.933655 master-0 kubenswrapper[27835]: I0318 13:43:52.933586 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-etc-machine-id\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.935256 master-0 kubenswrapper[27835]: I0318 13:43:52.935127 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-scripts\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.938051 master-0 kubenswrapper[27835]: I0318 13:43:52.937985 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-combined-ca-bundle\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.940546 master-0 kubenswrapper[27835]: I0318 13:43:52.938809 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-internal-tls-certs\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.941069 master-0 kubenswrapper[27835]: I0318 13:43:52.941003 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-public-tls-certs\") pod \"cinder-07518-api-0\" (UID: 
\"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.941523 master-0 kubenswrapper[27835]: I0318 13:43:52.941430 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.954863 master-0 kubenswrapper[27835]: I0318 13:43:52.954725 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-config-data-custom\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:52.955159 master-0 kubenswrapper[27835]: I0318 13:43:52.954923 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9fc4\" (UniqueName: \"kubernetes.io/projected/2fcb2b42-0212-4505-ac03-9b094ce3f2eb-kube-api-access-t9fc4\") pod \"cinder-07518-api-0\" (UID: \"2fcb2b42-0212-4505-ac03-9b094ce3f2eb\") " pod="openstack/cinder-07518-api-0" Mar 18 13:43:53.034053 master-0 kubenswrapper[27835]: I0318 13:43:53.033992 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-api-0" Mar 18 13:43:53.534821 master-0 kubenswrapper[27835]: I0318 13:43:53.533518 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-api-0"] Mar 18 13:43:53.540102 master-0 kubenswrapper[27835]: W0318 13:43:53.539824 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fcb2b42_0212_4505_ac03_9b094ce3f2eb.slice/crio-12c0358ea9513b834405b6fa293d6346b5ef4e20905b22dcd024dc1d4ea94e72 WatchSource:0}: Error finding container 12c0358ea9513b834405b6fa293d6346b5ef4e20905b22dcd024dc1d4ea94e72: Status 404 returned error can't find the container with id 12c0358ea9513b834405b6fa293d6346b5ef4e20905b22dcd024dc1d4ea94e72 Mar 18 13:43:53.890423 master-0 kubenswrapper[27835]: I0318 13:43:53.890332 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:53.956123 master-0 kubenswrapper[27835]: I0318 13:43:53.956071 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo\") pod \"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.956316 master-0 kubenswrapper[27835]: I0318 13:43:53.956129 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c56tx\" (UniqueName: \"kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx\") pod \"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.956316 master-0 kubenswrapper[27835]: I0318 13:43:53.956227 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts\") pod 
\"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.956316 master-0 kubenswrapper[27835]: I0318 13:43:53.956262 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data\") pod \"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.956316 master-0 kubenswrapper[27835]: I0318 13:43:53.956286 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged\") pod \"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.956316 master-0 kubenswrapper[27835]: I0318 13:43:53.956303 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle\") pod \"653293fa-39a3-4b35-ad41-6a3cac734e80\" (UID: \"653293fa-39a3-4b35-ad41-6a3cac734e80\") " Mar 18 13:43:53.958351 master-0 kubenswrapper[27835]: I0318 13:43:53.958302 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:43:53.974222 master-0 kubenswrapper[27835]: I0318 13:43:53.973626 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts" (OuterVolumeSpecName: "scripts") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:53.975987 master-0 kubenswrapper[27835]: I0318 13:43:53.975860 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 13:43:53.976509 master-0 kubenswrapper[27835]: I0318 13:43:53.976453 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx" (OuterVolumeSpecName: "kube-api-access-c56tx") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "kube-api-access-c56tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:43:54.004391 master-0 kubenswrapper[27835]: I0318 13:43:54.004344 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data" (OuterVolumeSpecName: "config-data") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:54.060510 master-0 kubenswrapper[27835]: I0318 13:43:54.060389 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.060510 master-0 kubenswrapper[27835]: I0318 13:43:54.060442 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.060510 master-0 kubenswrapper[27835]: I0318 13:43:54.060461 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/653293fa-39a3-4b35-ad41-6a3cac734e80-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.060510 master-0 kubenswrapper[27835]: I0318 13:43:54.060474 27835 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/653293fa-39a3-4b35-ad41-6a3cac734e80-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.060510 master-0 kubenswrapper[27835]: I0318 13:43:54.060487 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c56tx\" (UniqueName: \"kubernetes.io/projected/653293fa-39a3-4b35-ad41-6a3cac734e80-kube-api-access-c56tx\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.072618 master-0 kubenswrapper[27835]: I0318 13:43:54.072551 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "653293fa-39a3-4b35-ad41-6a3cac734e80" (UID: "653293fa-39a3-4b35-ad41-6a3cac734e80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:43:54.162429 master-0 kubenswrapper[27835]: I0318 13:43:54.162341 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/653293fa-39a3-4b35-ad41-6a3cac734e80-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:43:54.293331 master-0 kubenswrapper[27835]: I0318 13:43:54.293272 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd1e20cb-cebc-44b3-8f43-57ecd97ddf17" path="/var/lib/kubelet/pods/cd1e20cb-cebc-44b3-8f43-57ecd97ddf17/volumes" Mar 18 13:43:54.382512 master-0 kubenswrapper[27835]: I0318 13:43:54.381868 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-tmkx2" event={"ID":"653293fa-39a3-4b35-ad41-6a3cac734e80","Type":"ContainerDied","Data":"88956c789da0b4a9447a2da7209c2c8a8b243b390b656b632bc38beedaecb41a"} Mar 18 13:43:54.382512 master-0 kubenswrapper[27835]: I0318 13:43:54.381909 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-tmkx2" Mar 18 13:43:54.382512 master-0 kubenswrapper[27835]: I0318 13:43:54.381916 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88956c789da0b4a9447a2da7209c2c8a8b243b390b656b632bc38beedaecb41a" Mar 18 13:43:54.384529 master-0 kubenswrapper[27835]: I0318 13:43:54.384479 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"2fcb2b42-0212-4505-ac03-9b094ce3f2eb","Type":"ContainerStarted","Data":"bb6cc5392817ff8f05a678a53e4325810543798b734df0532dbf902030fcefea"} Mar 18 13:43:54.384596 master-0 kubenswrapper[27835]: I0318 13:43:54.384532 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"2fcb2b42-0212-4505-ac03-9b094ce3f2eb","Type":"ContainerStarted","Data":"12c0358ea9513b834405b6fa293d6346b5ef4e20905b22dcd024dc1d4ea94e72"} Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: I0318 13:43:54.850743 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-tkbch"] Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: E0318 13:43:54.851232 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerName="init" Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: I0318 13:43:54.851246 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerName="init" Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: E0318 13:43:54.851282 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerName="ironic-db-sync" Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: I0318 13:43:54.851288 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerName="ironic-db-sync" Mar 18 13:43:54.851606 master-0 kubenswrapper[27835]: I0318 
13:43:54.851593 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="653293fa-39a3-4b35-ad41-6a3cac734e80" containerName="ironic-db-sync" Mar 18 13:43:54.854251 master-0 kubenswrapper[27835]: I0318 13:43:54.852256 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:54.885444 master-0 kubenswrapper[27835]: I0318 13:43:54.884333 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-689c666fd-tjnb9"] Mar 18 13:43:54.887986 master-0 kubenswrapper[27835]: I0318 13:43:54.887936 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:54.892069 master-0 kubenswrapper[27835]: I0318 13:43:54.892016 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 18 13:43:54.907073 master-0 kubenswrapper[27835]: I0318 13:43:54.901807 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:54.907073 master-0 kubenswrapper[27835]: I0318 13:43:54.903146 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbvh\" (UniqueName: \"kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:54.915204 master-0 kubenswrapper[27835]: I0318 13:43:54.915144 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ironic-inspector-db-create-tkbch"] Mar 18 13:43:54.946856 master-0 kubenswrapper[27835]: I0318 13:43:54.939555 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-689c666fd-tjnb9"] Mar 18 13:43:55.007146 master-0 kubenswrapper[27835]: I0318 13:43:55.006927 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45dfh\" (UniqueName: \"kubernetes.io/projected/cc7df07d-4c6b-469f-b007-e3d799a49fd5-kube-api-access-45dfh\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.007146 master-0 kubenswrapper[27835]: I0318 13:43:55.007006 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-combined-ca-bundle\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.007146 master-0 kubenswrapper[27835]: I0318 13:43:55.007033 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbvh\" (UniqueName: \"kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:55.007146 master-0 kubenswrapper[27835]: I0318 13:43:55.007109 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:55.007146 
master-0 kubenswrapper[27835]: I0318 13:43:55.007132 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-config\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.052982 master-0 kubenswrapper[27835]: I0318 13:43:55.037559 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:55.078497 master-0 kubenswrapper[27835]: I0318 13:43:55.067656 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbvh\" (UniqueName: \"kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh\") pod \"ironic-inspector-db-create-tkbch\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:55.114452 master-0 kubenswrapper[27835]: I0318 13:43:55.113333 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-config\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.114452 master-0 kubenswrapper[27835]: I0318 13:43:55.113503 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45dfh\" (UniqueName: \"kubernetes.io/projected/cc7df07d-4c6b-469f-b007-e3d799a49fd5-kube-api-access-45dfh\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " 
pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.114452 master-0 kubenswrapper[27835]: I0318 13:43:55.113567 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-combined-ca-bundle\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.154445 master-0 kubenswrapper[27835]: I0318 13:43:55.151858 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-combined-ca-bundle\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.162449 master-0 kubenswrapper[27835]: I0318 13:43:55.155299 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45dfh\" (UniqueName: \"kubernetes.io/projected/cc7df07d-4c6b-469f-b007-e3d799a49fd5-kube-api-access-45dfh\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.162449 master-0 kubenswrapper[27835]: I0318 13:43:55.155340 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc7df07d-4c6b-469f-b007-e3d799a49fd5-config\") pod \"ironic-neutron-agent-689c666fd-tjnb9\" (UID: \"cc7df07d-4c6b-469f-b007-e3d799a49fd5\") " pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.173445 master-0 kubenswrapper[27835]: I0318 13:43:55.168680 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-1f88-account-create-update-pdcfx"] Mar 18 13:43:55.173445 master-0 kubenswrapper[27835]: I0318 13:43:55.171647 27835 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.180781 master-0 kubenswrapper[27835]: I0318 13:43:55.176814 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 18 13:43:55.222546 master-0 kubenswrapper[27835]: I0318 13:43:55.219915 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxt7n\" (UniqueName: \"kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.222546 master-0 kubenswrapper[27835]: I0318 13:43:55.220056 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.239079 master-0 kubenswrapper[27835]: I0318 13:43:55.238985 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-1f88-account-create-update-pdcfx"] Mar 18 13:43:55.320438 master-0 kubenswrapper[27835]: I0318 13:43:55.317846 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:43:55.333250 master-0 kubenswrapper[27835]: I0318 13:43:55.333185 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.333523 master-0 kubenswrapper[27835]: I0318 13:43:55.333501 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt7n\" (UniqueName: \"kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.334539 master-0 kubenswrapper[27835]: I0318 13:43:55.334504 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.348045 master-0 kubenswrapper[27835]: I0318 13:43:55.347991 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:43:55.417570 master-0 kubenswrapper[27835]: I0318 13:43:55.417508 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"] Mar 18 13:43:55.418128 master-0 kubenswrapper[27835]: I0318 13:43:55.418071 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerName="dnsmasq-dns" containerID="cri-o://7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781" gracePeriod=10 Mar 18 13:43:55.429895 master-0 kubenswrapper[27835]: I0318 13:43:55.429857 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"] Mar 18 13:43:55.430920 master-0 kubenswrapper[27835]: I0318 13:43:55.430872 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt7n\" (UniqueName: \"kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n\") pod \"ironic-inspector-1f88-account-create-update-pdcfx\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:43:55.432346 master-0 kubenswrapper[27835]: I0318 13:43:55.432325 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" Mar 18 13:43:55.433205 master-0 kubenswrapper[27835]: I0318 13:43:55.433161 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" Mar 18 13:43:55.444662 master-0 kubenswrapper[27835]: I0318 13:43:55.443329 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"] Mar 18 13:43:55.449749 master-0 kubenswrapper[27835]: I0318 13:43:55.449683 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"] Mar 18 13:43:55.465975 master-0 kubenswrapper[27835]: I0318 13:43:55.465166 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-857bc4bdd5-pjcbx" Mar 18 13:43:55.492968 master-0 kubenswrapper[27835]: I0318 13:43:55.490291 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 13:43:55.492968 master-0 kubenswrapper[27835]: I0318 13:43:55.490483 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 18 13:43:55.492968 master-0 kubenswrapper[27835]: I0318 13:43:55.490593 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 18 13:43:55.492968 master-0 kubenswrapper[27835]: I0318 13:43:55.490604 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 18 13:43:55.492968 master-0 kubenswrapper[27835]: I0318 13:43:55.490779 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 18 13:43:55.525466 master-0 kubenswrapper[27835]: I0318 13:43:55.525382 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"] Mar 18 13:43:55.538967 master-0 kubenswrapper[27835]: I0318 13:43:55.538908 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.539145 master-0 kubenswrapper[27835]: I0318 13:43:55.539018 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.539145 master-0 kubenswrapper[27835]: I0318 13:43:55.539045 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.539145 master-0 kubenswrapper[27835]: I0318 13:43:55.539123 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.539253 master-0 kubenswrapper[27835]: I0318 13:43:55.539178 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.539253 master-0 kubenswrapper[27835]: I0318 13:43:55.539194 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.572740 master-0 kubenswrapper[27835]: I0318 13:43:55.566641 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-api-0" event={"ID":"2fcb2b42-0212-4505-ac03-9b094ce3f2eb","Type":"ContainerStarted","Data":"c2fcec12c5af72281758ed105b64ca24a8cbe8f0a13e75dd828721a00fb46afc"}
Mar 18 13:43:55.572740 master-0 kubenswrapper[27835]: I0318 13:43:55.567477 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-07518-api-0"
Mar 18 13:43:55.609802 master-0 kubenswrapper[27835]: I0318 13:43:55.608520 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643194 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643260 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643287 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643318 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643387 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643423 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643461 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643512 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643531 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643559 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643583 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdz6\" (UniqueName: \"kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643778 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643840 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.643859 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.644820 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.645456 master-0 kubenswrapper[27835]: I0318 13:43:55.645397 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.646157 master-0 kubenswrapper[27835]: I0318 13:43:55.645948 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.646556 master-0 kubenswrapper[27835]: I0318 13:43:55.646529 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.669277 master-0 kubenswrapper[27835]: I0318 13:43:55.669218 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.746847 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.746979 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747196 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747283 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747344 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747373 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrdz6\" (UniqueName: \"kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747405 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.747770 master-0 kubenswrapper[27835]: I0318 13:43:55.747559 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.752726 master-0 kubenswrapper[27835]: I0318 13:43:55.752353 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.752852 master-0 kubenswrapper[27835]: I0318 13:43:55.752832 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.753313 master-0 kubenswrapper[27835]: I0318 13:43:55.752924 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.765566 master-0 kubenswrapper[27835]: I0318 13:43:55.765518 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.769129 master-0 kubenswrapper[27835]: I0318 13:43:55.767779 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.771014 master-0 kubenswrapper[27835]: I0318 13:43:55.770157 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:55.951845 master-0 kubenswrapper[27835]: I0318 13:43:55.951620 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:56.026570 master-0 kubenswrapper[27835]: I0318 13:43:56.026520 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8\") pod \"dnsmasq-dns-77bd5547c7-x2vlw\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:56.036216 master-0 kubenswrapper[27835]: I0318 13:43:56.036123 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-api-0" podStartSLOduration=4.036097971 podStartE2EDuration="4.036097971s" podCreationTimestamp="2026-03-18 13:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:56.023836496 +0000 UTC m=+1199.989048076" watchObservedRunningTime="2026-03-18 13:43:56.036097971 +0000 UTC m=+1200.001309551"
Mar 18 13:43:56.054218 master-0 kubenswrapper[27835]: I0318 13:43:56.054169 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrdz6\" (UniqueName: \"kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6\") pod \"ironic-857bc4bdd5-pjcbx\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") " pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:56.111142 master-0 kubenswrapper[27835]: I0318 13:43:56.110988 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:43:56.175246 master-0 kubenswrapper[27835]: I0318 13:43:56.175086 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:43:56.342076 master-0 kubenswrapper[27835]: I0318 13:43:56.342007 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.391269 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.391886 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.391982 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knxph\" (UniqueName: \"kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.392018 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.392107 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.392704 master-0 kubenswrapper[27835]: I0318 13:43:56.392132 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0\") pod \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\" (UID: \"37f5936e-eb3f-4cad-8762-fdcd4f042c63\") "
Mar 18 13:43:56.486049 master-0 kubenswrapper[27835]: I0318 13:43:56.484220 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config" (OuterVolumeSpecName: "config") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:56.486636 master-0 kubenswrapper[27835]: I0318 13:43:56.486557 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph" (OuterVolumeSpecName: "kube-api-access-knxph") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "kube-api-access-knxph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:43:56.503062 master-0 kubenswrapper[27835]: I0318 13:43:56.502925 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.503062 master-0 kubenswrapper[27835]: I0318 13:43:56.502970 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knxph\" (UniqueName: \"kubernetes.io/projected/37f5936e-eb3f-4cad-8762-fdcd4f042c63-kube-api-access-knxph\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.503948 master-0 kubenswrapper[27835]: I0318 13:43:56.503863 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:56.504160 master-0 kubenswrapper[27835]: I0318 13:43:56.504135 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:56.506074 master-0 kubenswrapper[27835]: I0318 13:43:56.506046 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:56.558235 master-0 kubenswrapper[27835]: I0318 13:43:56.558176 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37f5936e-eb3f-4cad-8762-fdcd4f042c63" (UID: "37f5936e-eb3f-4cad-8762-fdcd4f042c63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:43:56.606252 master-0 kubenswrapper[27835]: I0318 13:43:56.606119 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.606252 master-0 kubenswrapper[27835]: I0318 13:43:56.606183 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.606252 master-0 kubenswrapper[27835]: I0318 13:43:56.606203 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.606252 master-0 kubenswrapper[27835]: I0318 13:43:56.606215 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37f5936e-eb3f-4cad-8762-fdcd4f042c63-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:43:56.680451 master-0 kubenswrapper[27835]: I0318 13:43:56.673618 27835 generic.go:334] "Generic (PLEG): container finished" podID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerID="7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781" exitCode=0
Mar 18 13:43:56.680451 master-0 kubenswrapper[27835]: I0318 13:43:56.674709 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9"
Mar 18 13:43:56.680451 master-0 kubenswrapper[27835]: I0318 13:43:56.677148 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" event={"ID":"37f5936e-eb3f-4cad-8762-fdcd4f042c63","Type":"ContainerDied","Data":"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"}
Mar 18 13:43:56.680451 master-0 kubenswrapper[27835]: I0318 13:43:56.677203 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd84765b9-ps9s9" event={"ID":"37f5936e-eb3f-4cad-8762-fdcd4f042c63","Type":"ContainerDied","Data":"4be5d73359dfc29a9b2ed7999060e69529ff0ec42c134b74397f8a2e66d6771b"}
Mar 18 13:43:56.680451 master-0 kubenswrapper[27835]: I0318 13:43:56.677226 27835 scope.go:117] "RemoveContainer" containerID="7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"
Mar 18 13:43:56.769438 master-0 kubenswrapper[27835]: I0318 13:43:56.759189 27835 scope.go:117] "RemoveContainer" containerID="e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985"
Mar 18 13:43:56.776805 master-0 kubenswrapper[27835]: I0318 13:43:56.774455 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-1f88-account-create-update-pdcfx"]
Mar 18 13:43:56.789880 master-0 kubenswrapper[27835]: I0318 13:43:56.784336 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret"
Mar 18 13:43:56.789880 master-0 kubenswrapper[27835]: W0318 13:43:56.787511 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc7df07d_4c6b_469f_b007_e3d799a49fd5.slice/crio-066889232c74101117bd7a77cdf9baefb397599972bcbcc6854ebc3a70f29eed WatchSource:0}: Error finding container 066889232c74101117bd7a77cdf9baefb397599972bcbcc6854ebc3a70f29eed: Status 404 returned error can't find the container with id 066889232c74101117bd7a77cdf9baefb397599972bcbcc6854ebc3a70f29eed
Mar 18 13:43:56.820533 master-0 kubenswrapper[27835]: I0318 13:43:56.812688 27835 scope.go:117] "RemoveContainer" containerID="7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"
Mar 18 13:43:56.832574 master-0 kubenswrapper[27835]: E0318 13:43:56.824978 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781\": container with ID starting with 7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781 not found: ID does not exist" containerID="7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"
Mar 18 13:43:56.832574 master-0 kubenswrapper[27835]: I0318 13:43:56.825032 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781"} err="failed to get container status \"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781\": rpc error: code = NotFound desc = could not find container \"7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781\": container with ID starting with 7f9740ae99ba5a486baed3852c69856a097c8e8afbfb6226e9e1872f9e710781 not found: ID does not exist"
Mar 18 13:43:56.832574 master-0 kubenswrapper[27835]: I0318 13:43:56.825056 27835 scope.go:117] "RemoveContainer" containerID="e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985"
Mar 18 13:43:56.840569 master-0 kubenswrapper[27835]: E0318 13:43:56.833593 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985\": container with ID starting with e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985 not found: ID does not exist" containerID="e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985"
Mar 18 13:43:56.840569 master-0 kubenswrapper[27835]: I0318 13:43:56.833669 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985"} err="failed to get container status \"e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985\": rpc error: code = NotFound desc = could not find container \"e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985\": container with ID starting with e023683fd94a4bebad0450b20d9fb7fd926a47e7bc42225a37df1f21e8112985 not found: ID does not exist"
Mar 18 13:43:56.850295 master-0 kubenswrapper[27835]: I0318 13:43:56.849234 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-689c666fd-tjnb9"]
Mar 18 13:43:56.852038 master-0 kubenswrapper[27835]: W0318 13:43:56.851992 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode55b5aa7_9a4f_4042_91b6_6f03aaeada53.slice/crio-35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197 WatchSource:0}: Error finding container 35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197: Status 404 returned error can't find the container with id 35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197
Mar 18 13:43:56.874938 master-0 kubenswrapper[27835]: I0318 13:43:56.865227 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-tkbch"]
Mar 18 13:43:56.884749 master-0 kubenswrapper[27835]: I0318 13:43:56.884666 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"]
Mar 18 13:43:56.908014 master-0 kubenswrapper[27835]: I0318 13:43:56.907705 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bd84765b9-ps9s9"]
Mar 18 13:43:56.931847 master-0 kubenswrapper[27835]: I0318 13:43:56.931667 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"]
Mar 18 13:43:57.097846 master-0 kubenswrapper[27835]: I0318 13:43:57.097767 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"]
Mar 18 13:43:57.265036 master-0 kubenswrapper[27835]: I0318 13:43:57.264992 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:43:57.313622 master-0 kubenswrapper[27835]: I0318 13:43:57.307128 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:43:57.399620 master-0 kubenswrapper[27835]: I0318 13:43:57.399562 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-scheduler-0"]
Mar 18 13:43:57.497720 master-0 kubenswrapper[27835]: I0318 13:43:57.497589 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"]
Mar 18 13:43:57.501840 master-0 kubenswrapper[27835]: I0318 13:43:57.501735 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-backup-0"
Mar 18 13:43:57.624613 master-0 kubenswrapper[27835]: I0318 13:43:57.608163 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:43:57.689060 master-0 kubenswrapper[27835]: I0318 13:43:57.688962 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" event={"ID":"e55b5aa7-9a4f-4042-91b6-6f03aaeada53","Type":"ContainerStarted","Data":"35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197"}
Mar 18 13:43:57.691339 master-0 kubenswrapper[27835]: I0318 13:43:57.691309 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerStarted","Data":"066889232c74101117bd7a77cdf9baefb397599972bcbcc6854ebc3a70f29eed"}
Mar 18 13:43:57.695997 master-0 kubenswrapper[27835]: I0318 13:43:57.695181 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerStarted","Data":"a3d8568801f12916302771df0c3d0b0bd3e991a53157ff1307b76c4d24a6897d"}
Mar 18 13:43:57.696530 master-0 kubenswrapper[27835]: I0318 13:43:57.696495 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-tkbch" event={"ID":"31573687-c807-4574-8813-ba2280fb170a","Type":"ContainerStarted","Data":"a99bc7a309ba708665568b8b93be67b1506bb0b7e439a9fae2656ec4490c53dd"}
Mar 18 13:43:57.699099 master-0 kubenswrapper[27835]: I0318 13:43:57.699017 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-backup-0" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="cinder-backup" containerID="cri-o://f89bf4204b535ab750e695241c60df308ecd5f2ab4f3683dc0bb785add0f33b4" gracePeriod=30
Mar 18 13:43:57.701594 master-0 kubenswrapper[27835]: I0318 13:43:57.701381 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-scheduler-0" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="cinder-scheduler" containerID="cri-o://b46f7d8ac358127fb734caef37480cdda19800bfb6beaf303f0db89ef7e4fb5e" gracePeriod=30
Mar 18 13:43:57.701746 master-0 kubenswrapper[27835]: I0318 13:43:57.701697 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-scheduler-0" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="probe" containerID="cri-o://549a7ae3111471c58ef4084a11ca7ad6014c3967b2035c1cdd4e7a7be1c9132e" gracePeriod=30
Mar 18 13:43:57.703782 master-0 kubenswrapper[27835]: I0318 13:43:57.703638 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-backup-0" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="probe" containerID="cri-o://43f8b3bdb46056778948638357ff249d94df1b2779c75ff764cc1ed3f2f99d3a" gracePeriod=30
Mar 18 13:43:57.703956 master-0 kubenswrapper[27835]: I0318 13:43:57.699022 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" event={"ID":"b7cb1d5c-2899-4a22-b167-97f305fd2393","Type":"ContainerStarted","Data":"60186159df923e6cdd0f6b706a0f485c21c5e11f415f01af6de436c469388240"}
Mar 18 13:43:57.703956 master-0 kubenswrapper[27835]: I0318 13:43:57.703939 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" event={"ID":"b7cb1d5c-2899-4a22-b167-97f305fd2393","Type":"ContainerStarted","Data":"08208dfd13b47bf688ee8caf03b3b7c6532665575bf6b828a54095c8c0439d0a"}
Mar 18 13:43:57.704207 master-0 kubenswrapper[27835]: I0318 13:43:57.704158 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-volume-lvm-iscsi-0" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="cinder-volume" containerID="cri-o://7566039db55a3fa7a1c1e767db952b44b7bce0f0fe743c1d4c48cd7017b26a49" gracePeriod=30
Mar 18 13:43:57.704314 master-0 kubenswrapper[27835]: I0318 13:43:57.704231 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-07518-volume-lvm-iscsi-0" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="probe" containerID="cri-o://c351e66ead3b863e0bf41427b1781b2f159cb4bc7fb04945ff8e1b265cc8ad3a" gracePeriod=30
Mar 18 13:43:58.301125 master-0 kubenswrapper[27835]: I0318 13:43:58.300281 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" path="/var/lib/kubelet/pods/37f5936e-eb3f-4cad-8762-fdcd4f042c63/volumes"
Mar 18 13:43:58.724542 master-0 kubenswrapper[27835]: I0318 13:43:58.723481 27835 generic.go:334] "Generic (PLEG): container finished" podID="31573687-c807-4574-8813-ba2280fb170a" containerID="558920f472bcb8b8e3a3d02b1e6bfa76affd13b01e72b8f787e9104a9b85be22" exitCode=0
Mar 18 13:43:58.724542 master-0 kubenswrapper[27835]: I0318 13:43:58.723676 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-tkbch" event={"ID":"31573687-c807-4574-8813-ba2280fb170a","Type":"ContainerDied","Data":"558920f472bcb8b8e3a3d02b1e6bfa76affd13b01e72b8f787e9104a9b85be22"}
Mar 18 13:43:58.731296 master-0 kubenswrapper[27835]: I0318 13:43:58.731196 27835 generic.go:334] "Generic (PLEG): container finished" podID="b7cb1d5c-2899-4a22-b167-97f305fd2393" containerID="60186159df923e6cdd0f6b706a0f485c21c5e11f415f01af6de436c469388240" exitCode=0
Mar 18 13:43:58.731520 master-0 kubenswrapper[27835]: I0318 13:43:58.731307 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" event={"ID":"b7cb1d5c-2899-4a22-b167-97f305fd2393","Type":"ContainerDied","Data":"60186159df923e6cdd0f6b706a0f485c21c5e11f415f01af6de436c469388240"}
Mar 18 13:43:58.735503 master-0 kubenswrapper[27835]: I0318 13:43:58.734279 27835 generic.go:334] "Generic (PLEG): container finished" podID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerID="b2864d0693a14f2ef2886a56951aea80a78bb695e86b9f014ae631a06b82319e" exitCode=0
Mar 18 13:43:58.735503 master-0 kubenswrapper[27835]: I0318 13:43:58.734461 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" event={"ID":"e55b5aa7-9a4f-4042-91b6-6f03aaeada53","Type":"ContainerDied","Data":"b2864d0693a14f2ef2886a56951aea80a78bb695e86b9f014ae631a06b82319e"}
Mar 18 13:43:58.738530 master-0 kubenswrapper[27835]: I0318 13:43:58.737956 27835 generic.go:334] "Generic (PLEG): container finished" podID="a56300b8-65fa-4a52-8226-4a9e1cec54f7"
containerID="7566039db55a3fa7a1c1e767db952b44b7bce0f0fe743c1d4c48cd7017b26a49" exitCode=0 Mar 18 13:43:58.738530 master-0 kubenswrapper[27835]: I0318 13:43:58.738007 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerDied","Data":"7566039db55a3fa7a1c1e767db952b44b7bce0f0fe743c1d4c48cd7017b26a49"} Mar 18 13:43:58.742433 master-0 kubenswrapper[27835]: I0318 13:43:58.741053 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" podStartSLOduration=4.741033968 podStartE2EDuration="4.741033968s" podCreationTimestamp="2026-03-18 13:43:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:43:57.828228386 +0000 UTC m=+1201.793439966" watchObservedRunningTime="2026-03-18 13:43:58.741033968 +0000 UTC m=+1202.706245538" Mar 18 13:43:59.003903 master-0 kubenswrapper[27835]: I0318 13:43:59.003837 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Mar 18 13:43:59.004367 master-0 kubenswrapper[27835]: E0318 13:43:59.004337 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerName="init" Mar 18 13:43:59.004367 master-0 kubenswrapper[27835]: I0318 13:43:59.004359 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerName="init" Mar 18 13:43:59.004521 master-0 kubenswrapper[27835]: E0318 13:43:59.004373 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerName="dnsmasq-dns" Mar 18 13:43:59.004521 master-0 kubenswrapper[27835]: I0318 13:43:59.004380 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" 
containerName="dnsmasq-dns" Mar 18 13:43:59.004650 master-0 kubenswrapper[27835]: I0318 13:43:59.004631 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f5936e-eb3f-4cad-8762-fdcd4f042c63" containerName="dnsmasq-dns" Mar 18 13:43:59.007433 master-0 kubenswrapper[27835]: I0318 13:43:59.007384 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 18 13:43:59.016297 master-0 kubenswrapper[27835]: I0318 13:43:59.015457 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Mar 18 13:43:59.016297 master-0 kubenswrapper[27835]: I0318 13:43:59.015578 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Mar 18 13:43:59.055027 master-0 kubenswrapper[27835]: I0318 13:43:59.054974 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117355 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117465 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-scripts\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117501 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117561 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-70dc50fb-1678-4e9e-bbb3-486bf215528c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da2b6641-439d-46ee-9d6d-b11779b74995\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117592 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117663 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117683 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.118729 master-0 kubenswrapper[27835]: I0318 13:43:59.117726 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5lj6s\" (UniqueName: \"kubernetes.io/projected/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-kube-api-access-5lj6s\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.219814 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.219904 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.219983 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lj6s\" (UniqueName: \"kubernetes.io/projected/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-kube-api-access-5lj6s\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.220079 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.220143 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-scripts\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.220188 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.220270 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-70dc50fb-1678-4e9e-bbb3-486bf215528c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da2b6641-439d-46ee-9d6d-b11779b74995\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.220364 master-0 kubenswrapper[27835]: I0318 13:43:59.220313 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.221046 master-0 kubenswrapper[27835]: I0318 13:43:59.221005 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.226270 master-0 kubenswrapper[27835]: I0318 13:43:59.226243 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 13:43:59.226436 master-0 kubenswrapper[27835]: I0318 13:43:59.226286 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-70dc50fb-1678-4e9e-bbb3-486bf215528c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da2b6641-439d-46ee-9d6d-b11779b74995\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7519877228e672603eaef58465bb2f21894ebeebd520900f6d9e71d920ab049a/globalmount\"" pod="openstack/ironic-conductor-0" Mar 18 13:43:59.226826 master-0 kubenswrapper[27835]: I0318 13:43:59.226715 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.235091 master-0 kubenswrapper[27835]: I0318 13:43:59.234780 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.239047 master-0 kubenswrapper[27835]: I0318 13:43:59.238668 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.239047 master-0 kubenswrapper[27835]: I0318 13:43:59.238892 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-scripts\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " 
pod="openstack/ironic-conductor-0" Mar 18 13:43:59.251201 master-0 kubenswrapper[27835]: I0318 13:43:59.251143 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.293367 master-0 kubenswrapper[27835]: I0318 13:43:59.293238 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lj6s\" (UniqueName: \"kubernetes.io/projected/d2a793d4-62c6-4482-a5e5-21ed4cc72e33-kube-api-access-5lj6s\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:43:59.659581 master-0 kubenswrapper[27835]: I0318 13:43:59.652576 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-64fd4cfc77-nwzfl"] Mar 18 13:43:59.711609 master-0 kubenswrapper[27835]: I0318 13:43:59.711570 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.713892 master-0 kubenswrapper[27835]: I0318 13:43:59.713847 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Mar 18 13:43:59.715059 master-0 kubenswrapper[27835]: I0318 13:43:59.715030 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Mar 18 13:43:59.731459 master-0 kubenswrapper[27835]: I0318 13:43:59.731374 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-64fd4cfc77-nwzfl"] Mar 18 13:43:59.757057 master-0 kubenswrapper[27835]: I0318 13:43:59.756732 27835 generic.go:334] "Generic (PLEG): container finished" podID="eea50522-5362-473e-b6d2-999ce4e00950" containerID="549a7ae3111471c58ef4084a11ca7ad6014c3967b2035c1cdd4e7a7be1c9132e" exitCode=0 Mar 18 13:43:59.757057 master-0 kubenswrapper[27835]: I0318 13:43:59.756816 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerDied","Data":"549a7ae3111471c58ef4084a11ca7ad6014c3967b2035c1cdd4e7a7be1c9132e"} Mar 18 13:43:59.760897 master-0 kubenswrapper[27835]: I0318 13:43:59.760845 27835 generic.go:334] "Generic (PLEG): container finished" podID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerID="43f8b3bdb46056778948638357ff249d94df1b2779c75ff764cc1ed3f2f99d3a" exitCode=0 Mar 18 13:43:59.761011 master-0 kubenswrapper[27835]: I0318 13:43:59.760913 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerDied","Data":"43f8b3bdb46056778948638357ff249d94df1b2779c75ff764cc1ed3f2f99d3a"} Mar 18 13:43:59.804960 master-0 kubenswrapper[27835]: I0318 13:43:59.804900 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm6cc\" (UniqueName: 
\"kubernetes.io/projected/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-kube-api-access-mm6cc\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.805215 master-0 kubenswrapper[27835]: I0318 13:43:59.804997 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-logs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.805305 master-0 kubenswrapper[27835]: I0318 13:43:59.805224 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-merged\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.805553 master-0 kubenswrapper[27835]: I0318 13:43:59.805528 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-scripts\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.805837 master-0 kubenswrapper[27835]: I0318 13:43:59.805800 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.805923 master-0 kubenswrapper[27835]: I0318 13:43:59.805877 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-internal-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.806123 master-0 kubenswrapper[27835]: I0318 13:43:59.806091 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-public-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.806229 master-0 kubenswrapper[27835]: I0318 13:43:59.806213 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-combined-ca-bundle\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.806389 master-0 kubenswrapper[27835]: I0318 13:43:59.806375 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-etc-podinfo\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.806469 master-0 kubenswrapper[27835]: I0318 13:43:59.806452 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-custom\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.911753 master-0 kubenswrapper[27835]: I0318 13:43:59.910547 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.913360 master-0 kubenswrapper[27835]: I0318 13:43:59.913312 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-internal-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.913360 master-0 kubenswrapper[27835]: I0318 13:43:59.913505 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-public-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.913887 master-0 kubenswrapper[27835]: I0318 13:43:59.913859 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-combined-ca-bundle\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.914169 master-0 kubenswrapper[27835]: I0318 13:43:59.914078 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-etc-podinfo\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.914792 master-0 kubenswrapper[27835]: I0318 13:43:59.914744 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-custom\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.915000 master-0 kubenswrapper[27835]: I0318 13:43:59.914967 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm6cc\" (UniqueName: \"kubernetes.io/projected/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-kube-api-access-mm6cc\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.915076 master-0 kubenswrapper[27835]: I0318 13:43:59.915024 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-logs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.915129 master-0 kubenswrapper[27835]: I0318 13:43:59.915113 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-merged\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.915346 master-0 kubenswrapper[27835]: I0318 13:43:59.915315 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-scripts\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.917818 master-0 kubenswrapper[27835]: I0318 13:43:59.917761 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-logs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.918214 master-0 kubenswrapper[27835]: I0318 13:43:59.918139 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.918984 master-0 kubenswrapper[27835]: I0318 13:43:59.918933 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-internal-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.921598 master-0 kubenswrapper[27835]: I0318 13:43:59.921557 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-merged\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.923308 master-0 kubenswrapper[27835]: I0318 13:43:59.923273 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-config-data-custom\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.924077 master-0 kubenswrapper[27835]: I0318 13:43:59.924027 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-scripts\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.924225 master-0 kubenswrapper[27835]: I0318 13:43:59.924190 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-combined-ca-bundle\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.924645 master-0 kubenswrapper[27835]: I0318 13:43:59.924613 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-public-tls-certs\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.925885 master-0 kubenswrapper[27835]: I0318 13:43:59.925848 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-etc-podinfo\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:43:59.934676 master-0 kubenswrapper[27835]: I0318 13:43:59.934616 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm6cc\" (UniqueName: \"kubernetes.io/projected/f5e2d50f-621d-4ba6-9293-7c3d111e08dc-kube-api-access-mm6cc\") pod \"ironic-64fd4cfc77-nwzfl\" (UID: \"f5e2d50f-621d-4ba6-9293-7c3d111e08dc\") " pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:44:00.060179 master-0 kubenswrapper[27835]: I0318 13:44:00.059913 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:44:00.777593 master-0 kubenswrapper[27835]: I0318 13:44:00.775117 27835 generic.go:334] "Generic (PLEG): container finished" podID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerID="c351e66ead3b863e0bf41427b1781b2f159cb4bc7fb04945ff8e1b265cc8ad3a" exitCode=0 Mar 18 13:44:00.777593 master-0 kubenswrapper[27835]: I0318 13:44:00.775181 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerDied","Data":"c351e66ead3b863e0bf41427b1781b2f159cb4bc7fb04945ff8e1b265cc8ad3a"} Mar 18 13:44:01.036520 master-0 kubenswrapper[27835]: I0318 13:44:01.029068 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-70dc50fb-1678-4e9e-bbb3-486bf215528c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^da2b6641-439d-46ee-9d6d-b11779b74995\") pod \"ironic-conductor-0\" (UID: \"d2a793d4-62c6-4482-a5e5-21ed4cc72e33\") " pod="openstack/ironic-conductor-0" Mar 18 13:44:01.133997 master-0 kubenswrapper[27835]: I0318 13:44:01.133099 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 18 13:44:01.812738 master-0 kubenswrapper[27835]: I0318 13:44:01.812674 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"a56300b8-65fa-4a52-8226-4a9e1cec54f7","Type":"ContainerDied","Data":"eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3"} Mar 18 13:44:01.813373 master-0 kubenswrapper[27835]: I0318 13:44:01.812745 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae4581d59f5d0c2dffedddd22f8885012fb7176af5ebab26e527bd54bca1af3" Mar 18 13:44:01.831155 master-0 kubenswrapper[27835]: I0318 13:44:01.830951 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:44:01.832137 master-0 kubenswrapper[27835]: I0318 13:44:01.832070 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-tkbch" event={"ID":"31573687-c807-4574-8813-ba2280fb170a","Type":"ContainerDied","Data":"a99bc7a309ba708665568b8b93be67b1506bb0b7e439a9fae2656ec4490c53dd"} Mar 18 13:44:01.832137 master-0 kubenswrapper[27835]: I0318 13:44:01.832121 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99bc7a309ba708665568b8b93be67b1506bb0b7e439a9fae2656ec4490c53dd" Mar 18 13:44:01.859952 master-0 kubenswrapper[27835]: I0318 13:44:01.858014 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:01.860976 master-0 kubenswrapper[27835]: I0318 13:44:01.860939 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-tkbch" Mar 18 13:44:01.861178 master-0 kubenswrapper[27835]: I0318 13:44:01.861151 27835 generic.go:334] "Generic (PLEG): container finished" podID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerID="f89bf4204b535ab750e695241c60df308ecd5f2ab4f3683dc0bb785add0f33b4" exitCode=0 Mar 18 13:44:01.861266 master-0 kubenswrapper[27835]: I0318 13:44:01.861206 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerDied","Data":"f89bf4204b535ab750e695241c60df308ecd5f2ab4f3683dc0bb785add0f33b4"} Mar 18 13:44:01.870180 master-0 kubenswrapper[27835]: I0318 13:44:01.870120 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 13:44:01.871280 master-0 kubenswrapper[27835]: I0318 13:44:01.871234 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" Mar 18 13:44:01.871518 master-0 kubenswrapper[27835]: I0318 13:44:01.871247 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-1f88-account-create-update-pdcfx" event={"ID":"b7cb1d5c-2899-4a22-b167-97f305fd2393","Type":"ContainerDied","Data":"08208dfd13b47bf688ee8caf03b3b7c6532665575bf6b828a54095c8c0439d0a"} Mar 18 13:44:01.871518 master-0 kubenswrapper[27835]: I0318 13:44:01.871444 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08208dfd13b47bf688ee8caf03b3b7c6532665575bf6b828a54095c8c0439d0a" Mar 18 13:44:01.879979 master-0 kubenswrapper[27835]: I0318 13:44:01.877294 27835 generic.go:334] "Generic (PLEG): container finished" podID="eea50522-5362-473e-b6d2-999ce4e00950" containerID="b46f7d8ac358127fb734caef37480cdda19800bfb6beaf303f0db89ef7e4fb5e" exitCode=0 Mar 18 13:44:01.879979 master-0 kubenswrapper[27835]: I0318 13:44:01.877359 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerDied","Data":"b46f7d8ac358127fb734caef37480cdda19800bfb6beaf303f0db89ef7e4fb5e"} Mar 18 13:44:01.882043 master-0 kubenswrapper[27835]: I0318 13:44:01.882001 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932141 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932268 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932292 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932306 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932325 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts\") pod \"b7cb1d5c-2899-4a22-b167-97f305fd2393\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932368 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7snwz\" (UniqueName: \"kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932395 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 
18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932433 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932461 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932509 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932537 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932554 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932570 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932599 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932621 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbvh\" (UniqueName: \"kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh\") pod \"31573687-c807-4574-8813-ba2280fb170a\" (UID: \"31573687-c807-4574-8813-ba2280fb170a\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932645 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932696 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder\") pod \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\" (UID: \"a56300b8-65fa-4a52-8226-4a9e1cec54f7\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932747 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts\") pod \"31573687-c807-4574-8813-ba2280fb170a\" (UID: 
\"31573687-c807-4574-8813-ba2280fb170a\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.932769 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxt7n\" (UniqueName: \"kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n\") pod \"b7cb1d5c-2899-4a22-b167-97f305fd2393\" (UID: \"b7cb1d5c-2899-4a22-b167-97f305fd2393\") " Mar 18 13:44:01.936360 master-0 kubenswrapper[27835]: I0318 13:44:01.936289 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.946129 master-0 kubenswrapper[27835]: I0318 13:44:01.945903 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.946129 master-0 kubenswrapper[27835]: I0318 13:44:01.945965 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.948577 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.948675 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.950974 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7cb1d5c-2899-4a22-b167-97f305fd2393" (UID: "b7cb1d5c-2899-4a22-b167-97f305fd2393"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.951014 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys" (OuterVolumeSpecName: "sys") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.951036 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run" (OuterVolumeSpecName: "run") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.951054 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.951077 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.951096 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev" (OuterVolumeSpecName: "dev") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.953944 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts" (OuterVolumeSpecName: "scripts") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.953995 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz" (OuterVolumeSpecName: "kube-api-access-7snwz") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "kube-api-access-7snwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:01.955718 master-0 kubenswrapper[27835]: I0318 13:44:01.954065 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31573687-c807-4574-8813-ba2280fb170a" (UID: "31573687-c807-4574-8813-ba2280fb170a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:01.973842 master-0 kubenswrapper[27835]: I0318 13:44:01.972885 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n" (OuterVolumeSpecName: "kube-api-access-nxt7n") pod "b7cb1d5c-2899-4a22-b167-97f305fd2393" (UID: "b7cb1d5c-2899-4a22-b167-97f305fd2393"). InnerVolumeSpecName "kube-api-access-nxt7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:01.999008 master-0 kubenswrapper[27835]: I0318 13:44:01.996911 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:02.011585 master-0 kubenswrapper[27835]: I0318 13:44:02.003367 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh" (OuterVolumeSpecName: "kube-api-access-7bbvh") pod "31573687-c807-4574-8813-ba2280fb170a" (UID: "31573687-c807-4574-8813-ba2280fb170a"). InnerVolumeSpecName "kube-api-access-7bbvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037680 27835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037723 27835 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037733 27835 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037742 27835 reconciler_common.go:293] "Volume detached for volume 
\"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037752 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbvh\" (UniqueName: \"kubernetes.io/projected/31573687-c807-4574-8813-ba2280fb170a-kube-api-access-7bbvh\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037762 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037770 27835 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037779 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31573687-c807-4574-8813-ba2280fb170a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037789 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxt7n\" (UniqueName: \"kubernetes.io/projected/b7cb1d5c-2899-4a22-b167-97f305fd2393-kube-api-access-nxt7n\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037797 27835 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-sys\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037809 27835 reconciler_common.go:293] "Volume detached for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-run\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037818 27835 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037826 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cb1d5c-2899-4a22-b167-97f305fd2393-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037834 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7snwz\" (UniqueName: \"kubernetes.io/projected/a56300b8-65fa-4a52-8226-4a9e1cec54f7-kube-api-access-7snwz\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037842 27835 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-dev\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037850 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.046053 master-0 kubenswrapper[27835]: I0318 13:44:02.037858 27835 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a56300b8-65fa-4a52-8226-4a9e1cec54f7-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.083366 master-0 kubenswrapper[27835]: I0318 13:44:02.082253 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:02.158705 master-0 kubenswrapper[27835]: I0318 13:44:02.157121 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.369528 master-0 kubenswrapper[27835]: I0318 13:44:02.369369 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:02.467188 master-0 kubenswrapper[27835]: I0318 13:44:02.467119 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.467495 master-0 kubenswrapper[27835]: I0318 13:44:02.467294 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.467495 master-0 kubenswrapper[27835]: I0318 13:44:02.467337 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.467495 master-0 kubenswrapper[27835]: I0318 13:44:02.467367 27835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-26glv\" (UniqueName: \"kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.467495 master-0 kubenswrapper[27835]: I0318 13:44:02.467470 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.467700 master-0 kubenswrapper[27835]: I0318 13:44:02.467664 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle\") pod \"eea50522-5362-473e-b6d2-999ce4e00950\" (UID: \"eea50522-5362-473e-b6d2-999ce4e00950\") " Mar 18 13:44:02.479963 master-0 kubenswrapper[27835]: I0318 13:44:02.479882 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 13:44:02.481526 master-0 kubenswrapper[27835]: I0318 13:44:02.481488 27835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea50522-5362-473e-b6d2-999ce4e00950-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:02.508947 master-0 kubenswrapper[27835]: I0318 13:44:02.508763 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:02.516040 master-0 kubenswrapper[27835]: I0318 13:44:02.515931 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv" (OuterVolumeSpecName: "kube-api-access-26glv") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "kube-api-access-26glv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:02.535070 master-0 kubenswrapper[27835]: I0318 13:44:02.534977 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts" (OuterVolumeSpecName: "scripts") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:02.585142 master-0 kubenswrapper[27835]: I0318 13:44:02.585076 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26glv\" (UniqueName: \"kubernetes.io/projected/eea50522-5362-473e-b6d2-999ce4e00950-kube-api-access-26glv\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:02.585142 master-0 kubenswrapper[27835]: I0318 13:44:02.585136 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:02.585142 master-0 kubenswrapper[27835]: I0318 13:44:02.585147 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:02.812437 master-0 kubenswrapper[27835]: I0318 13:44:02.806164 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-64fd4cfc77-nwzfl"]
Mar 18 13:44:02.907651 master-0 kubenswrapper[27835]: I0318 13:44:02.907582 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Mar 18 13:44:02.908500 master-0 kubenswrapper[27835]: I0318 13:44:02.908259 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"f4adff55-e451-46fd-8e42-3aae24aa8baf","Type":"ContainerDied","Data":"0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa"}
Mar 18 13:44:02.908500 master-0 kubenswrapper[27835]: I0318 13:44:02.908288 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0253b6dd2763a2d2da902c20d1d02e1abd36610e9790a26b05426a7369f425aa"
Mar 18 13:44:02.911557 master-0 kubenswrapper[27835]: I0318 13:44:02.911435 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" event={"ID":"e55b5aa7-9a4f-4042-91b6-6f03aaeada53","Type":"ContainerStarted","Data":"d23f8cc53a3e4ddc3643e1801a64fc62b1e0b556bc933ce54e5a6a65b3886338"}
Mar 18 13:44:02.915098 master-0 kubenswrapper[27835]: I0318 13:44:02.914908 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw"
Mar 18 13:44:02.917835 master-0 kubenswrapper[27835]: W0318 13:44:02.917686 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2a793d4_62c6_4482_a5e5_21ed4cc72e33.slice/crio-c333d7528e5fc747d604cc897279e0832a6b635d82b036df1f31bd808ce79553 WatchSource:0}: Error finding container c333d7528e5fc747d604cc897279e0832a6b635d82b036df1f31bd808ce79553: Status 404 returned error can't find the container with id c333d7528e5fc747d604cc897279e0832a6b635d82b036df1f31bd808ce79553
Mar 18 13:44:02.923710 master-0 kubenswrapper[27835]: I0318 13:44:02.923360 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:02.924668 master-0 kubenswrapper[27835]: I0318 13:44:02.924611 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:44:02.938905 master-0 kubenswrapper[27835]: I0318 13:44:02.927054 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"eea50522-5362-473e-b6d2-999ce4e00950","Type":"ContainerDied","Data":"eb6aff1dc5397636ddce8851f049258f949663bc57d543f3267eaabb21983ef7"}
Mar 18 13:44:02.938905 master-0 kubenswrapper[27835]: I0318 13:44:02.927154 27835 scope.go:117] "RemoveContainer" containerID="549a7ae3111471c58ef4084a11ca7ad6014c3967b2035c1cdd4e7a7be1c9132e"
Mar 18 13:44:02.938905 master-0 kubenswrapper[27835]: I0318 13:44:02.927097 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-tkbch"
Mar 18 13:44:02.972438 master-0 kubenswrapper[27835]: I0318 13:44:02.972070 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" podStartSLOduration=7.972051556 podStartE2EDuration="7.972051556s" podCreationTimestamp="2026-03-18 13:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:02.95144612 +0000 UTC m=+1206.916657700" watchObservedRunningTime="2026-03-18 13:44:02.972051556 +0000 UTC m=+1206.937263126"
Mar 18 13:44:03.078868 master-0 kubenswrapper[27835]: I0318 13:44:03.078812 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:03.173562 master-0 kubenswrapper[27835]: I0318 13:44:03.173204 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.186813 master-0 kubenswrapper[27835]: I0318 13:44:03.182330 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data" (OuterVolumeSpecName: "config-data") pod "a56300b8-65fa-4a52-8226-4a9e1cec54f7" (UID: "a56300b8-65fa-4a52-8226-4a9e1cec54f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:03.278479 master-0 kubenswrapper[27835]: I0318 13:44:03.277806 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a56300b8-65fa-4a52-8226-4a9e1cec54f7-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.340651 master-0 kubenswrapper[27835]: I0318 13:44:03.340571 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data" (OuterVolumeSpecName: "config-data") pod "eea50522-5362-473e-b6d2-999ce4e00950" (UID: "eea50522-5362-473e-b6d2-999ce4e00950"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:03.380186 master-0 kubenswrapper[27835]: I0318 13:44:03.380138 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea50522-5362-473e-b6d2-999ce4e00950-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.398034 master-0 kubenswrapper[27835]: I0318 13:44:03.397979 27835 scope.go:117] "RemoveContainer" containerID="b46f7d8ac358127fb734caef37480cdda19800bfb6beaf303f0db89ef7e4fb5e"
Mar 18 13:44:03.404823 master-0 kubenswrapper[27835]: I0318 13:44:03.404771 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:03.481798 master-0 kubenswrapper[27835]: I0318 13:44:03.481724 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.481798 master-0 kubenswrapper[27835]: I0318 13:44:03.481798 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.481854 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.481889 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.481921 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.481968 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.481988 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.482005 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.482033 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482075 master-0 kubenswrapper[27835]: I0318 13:44:03.482072 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxv9\" (UniqueName: \"kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482089 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482115 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482131 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482161 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482238 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick\") pod \"f4adff55-e451-46fd-8e42-3aae24aa8baf\" (UID: \"f4adff55-e451-46fd-8e42-3aae24aa8baf\") "
Mar 18 13:44:03.482626 master-0 kubenswrapper[27835]: I0318 13:44:03.482531 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.483053 master-0 kubenswrapper[27835]: I0318 13:44:03.482636 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.483053 master-0 kubenswrapper[27835]: I0318 13:44:03.482569 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev" (OuterVolumeSpecName: "dev") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486330 master-0 kubenswrapper[27835]: I0318 13:44:03.483083 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486330 master-0 kubenswrapper[27835]: I0318 13:44:03.483103 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run" (OuterVolumeSpecName: "run") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486330 master-0 kubenswrapper[27835]: I0318 13:44:03.483118 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486330 master-0 kubenswrapper[27835]: I0318 13:44:03.483134 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486609 master-0 kubenswrapper[27835]: I0318 13:44:03.483056 27835 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.486609 master-0 kubenswrapper[27835]: I0318 13:44:03.486391 27835 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.486609 master-0 kubenswrapper[27835]: I0318 13:44:03.486508 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys" (OuterVolumeSpecName: "sys") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.486609 master-0 kubenswrapper[27835]: I0318 13:44:03.486540 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.491000 master-0 kubenswrapper[27835]: I0318 13:44:03.490893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 13:44:03.529771 master-0 kubenswrapper[27835]: I0318 13:44:03.524480 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"]
Mar 18 13:44:03.529771 master-0 kubenswrapper[27835]: I0318 13:44:03.525237 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts" (OuterVolumeSpecName: "scripts") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:03.529771 master-0 kubenswrapper[27835]: I0318 13:44:03.526809 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9" (OuterVolumeSpecName: "kube-api-access-tgxv9") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "kube-api-access-tgxv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:03.548963 master-0 kubenswrapper[27835]: I0318 13:44:03.548716 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"]
Mar 18 13:44:03.557578 master-0 kubenswrapper[27835]: I0318 13:44:03.555192 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562064 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"]
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562617 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="cinder-scheduler"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562655 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="cinder-scheduler"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562668 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="cinder-volume"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562674 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="cinder-volume"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562682 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cb1d5c-2899-4a22-b167-97f305fd2393" containerName="mariadb-account-create-update"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562688 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cb1d5c-2899-4a22-b167-97f305fd2393" containerName="mariadb-account-create-update"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562707 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="cinder-backup"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562714 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="cinder-backup"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562736 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562742 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562754 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31573687-c807-4574-8813-ba2280fb170a" containerName="mariadb-database-create"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562762 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="31573687-c807-4574-8813-ba2280fb170a" containerName="mariadb-database-create"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562775 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562781 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: E0318 13:44:03.562799 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.562805 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563035 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cb1d5c-2899-4a22-b167-97f305fd2393" containerName="mariadb-account-create-update"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563055 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="cinder-scheduler"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563066 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="cinder-backup"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563080 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563088 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="cinder-volume"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563102 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea50522-5362-473e-b6d2-999ce4e00950" containerName="probe"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563114 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="31573687-c807-4574-8813-ba2280fb170a" containerName="mariadb-database-create"
Mar 18 13:44:03.570465 master-0 kubenswrapper[27835]: I0318 13:44:03.563127 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" containerName="probe"
Mar 18 13:44:03.591700 master-0 kubenswrapper[27835]: I0318 13:44:03.577619 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"]
Mar 18 13:44:03.591700 master-0 kubenswrapper[27835]: I0318 13:44:03.577815 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.591700 master-0 kubenswrapper[27835]: I0318 13:44:03.584305 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-volume-lvm-iscsi-config-data"
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.594937 27835 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.594987 27835 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-run\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.594998 27835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595012 27835 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595021 27835 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-sys\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595029 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595037 27835 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595049 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxv9\" (UniqueName: \"kubernetes.io/projected/f4adff55-e451-46fd-8e42-3aae24aa8baf-kube-api-access-tgxv9\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595059 27835 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-dev\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595067 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.595367 master-0 kubenswrapper[27835]: I0318 13:44:03.595076 27835 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f4adff55-e451-46fd-8e42-3aae24aa8baf-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:03.678759 master-0 kubenswrapper[27835]: I0318 13:44:03.678721 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c47db445d-25kc6"
Mar 18 13:44:03.702324 master-0 kubenswrapper[27835]: I0318 13:44:03.702280 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702334 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702380 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702403 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89xm\" (UniqueName: \"kubernetes.io/projected/223b1b5f-e043-4216-99da-3329720c45d7-kube-api-access-t89xm\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702513 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702535 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702569 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702602 master-0 kubenswrapper[27835]: I0318 13:44:03.702604 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702640 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702666 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702692 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702708 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702727 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702765 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.702832 master-0 kubenswrapper[27835]: I0318 13:44:03.702783 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.740752 master-0 kubenswrapper[27835]: I0318 13:44:03.740597 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c47db445d-25kc6"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.804697 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.804785 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89xm\" (UniqueName: \"kubernetes.io/projected/223b1b5f-e043-4216-99da-3329720c45d7-kube-api-access-t89xm\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.804850 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.804878 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.804954 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805019 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805079 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805118 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805162 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805191 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805224 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805319 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805347 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805488 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805533 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.805665 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-machine-id\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.806184 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.806542 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-lib-modules\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.806558 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-lib-cinder\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.806597 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-iscsi\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.806936 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-dev\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.807223 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-sys\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.807265 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-etc-nvme\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.807292 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-run\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.810218 master-0 kubenswrapper[27835]: I0318 13:44:03.807423 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/223b1b5f-e043-4216-99da-3329720c45d7-var-locks-brick\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.861854 master-0 kubenswrapper[27835]: I0318 13:44:03.856872 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:44:03.866052 master-0 kubenswrapper[27835]: I0318 13:44:03.866004 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:44:03.874660 master-0 kubenswrapper[27835]: I0318 13:44:03.872629 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:44:03.874660 master-0 kubenswrapper[27835]: I0318 13:44:03.872908 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-scripts\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.875877 master-0 kubenswrapper[27835]: I0318 13:44:03.875820 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.876950 master-0 kubenswrapper[27835]: I0318 13:44:03.876880 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.879225 master-0 kubenswrapper[27835]: I0318 13:44:03.878174 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-scheduler-config-data" Mar 18 13:44:03.901498 master-0 kubenswrapper[27835]: I0318 13:44:03.898657 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-config-data-custom\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.901498 master-0 kubenswrapper[27835]: I0318 13:44:03.901174 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/223b1b5f-e043-4216-99da-3329720c45d7-combined-ca-bundle\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907067 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907115 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxsv\" (UniqueName: \"kubernetes.io/projected/76243274-a619-4c3f-8b9c-19a8c89eb6f1-kube-api-access-tmxsv\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907205 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76243274-a619-4c3f-8b9c-19a8c89eb6f1-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907230 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907254 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.907394 master-0 kubenswrapper[27835]: I0318 13:44:03.907313 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:03.932445 
master-0 kubenswrapper[27835]: I0318 13:44:03.924582 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89xm\" (UniqueName: \"kubernetes.io/projected/223b1b5f-e043-4216-99da-3329720c45d7-kube-api-access-t89xm\") pod \"cinder-07518-volume-lvm-iscsi-0\" (UID: \"223b1b5f-e043-4216-99da-3329720c45d7\") " pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:03.970672 master-0 kubenswrapper[27835]: I0318 13:44:03.970578 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-64fd4cfc77-nwzfl" event={"ID":"f5e2d50f-621d-4ba6-9293-7c3d111e08dc","Type":"ContainerStarted","Data":"348d62f48a132c626487cbc1c2cd4311e27b136a076c26d4e6774cd9baf88793"} Mar 18 13:44:03.970672 master-0 kubenswrapper[27835]: I0318 13:44:03.970667 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-64fd4cfc77-nwzfl" event={"ID":"f5e2d50f-621d-4ba6-9293-7c3d111e08dc","Type":"ContainerStarted","Data":"cc8d40c6b7cb2ad9a7c0cb169f64b13e9882adf2bc2875626bb8a0aa12d038d3"} Mar 18 13:44:03.981031 master-0 kubenswrapper[27835]: I0318 13:44:03.980712 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerStarted","Data":"1bd2458b7437e971916523f8a3129ce49df3b5e69e5aff24455222f6316315ba"} Mar 18 13:44:03.981031 master-0 kubenswrapper[27835]: I0318 13:44:03.980879 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:44:04.012763 master-0 kubenswrapper[27835]: I0318 13:44:04.010270 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0" Mar 18 13:44:04.012763 master-0 kubenswrapper[27835]: I0318 13:44:04.011476 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-volume-lvm-iscsi-0" Mar 18 13:44:04.017578 master-0 kubenswrapper[27835]: I0318 13:44:04.017477 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"c333d7528e5fc747d604cc897279e0832a6b635d82b036df1f31bd808ce79553"} Mar 18 13:44:04.020826 master-0 kubenswrapper[27835]: I0318 13:44:04.020772 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-scheduler-0"] Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.043649 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.043715 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmxsv\" (UniqueName: \"kubernetes.io/projected/76243274-a619-4c3f-8b9c-19a8c89eb6f1-kube-api-access-tmxsv\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.044403 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76243274-a619-4c3f-8b9c-19a8c89eb6f1-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.044518 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.044569 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.044708 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.051671 master-0 kubenswrapper[27835]: I0318 13:44:04.047215 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76243274-a619-4c3f-8b9c-19a8c89eb6f1-etc-machine-id\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.063742 master-0 kubenswrapper[27835]: I0318 13:44:04.063682 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.067852 master-0 kubenswrapper[27835]: I0318 13:44:04.067555 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-scripts\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.067852 master-0 kubenswrapper[27835]: I0318 13:44:04.067570 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-config-data-custom\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.086166 master-0 kubenswrapper[27835]: I0318 13:44:04.078117 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmxsv\" (UniqueName: \"kubernetes.io/projected/76243274-a619-4c3f-8b9c-19a8c89eb6f1-kube-api-access-tmxsv\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.086166 master-0 kubenswrapper[27835]: I0318 13:44:04.079632 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76243274-a619-4c3f-8b9c-19a8c89eb6f1-combined-ca-bundle\") pod \"cinder-07518-scheduler-0\" (UID: \"76243274-a619-4c3f-8b9c-19a8c89eb6f1\") " pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.118858 master-0 kubenswrapper[27835]: I0318 13:44:04.118777 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podStartSLOduration=5.292679422 podStartE2EDuration="10.118756319s" podCreationTimestamp="2026-03-18 13:43:54 +0000 UTC" firstStartedPulling="2026-03-18 13:43:56.840834676 +0000 UTC m=+1200.806046236" lastFinishedPulling="2026-03-18 13:44:01.666911573 +0000 UTC m=+1205.632123133" observedRunningTime="2026-03-18 13:44:04.0565542 +0000 UTC m=+1208.021765760" watchObservedRunningTime="2026-03-18 13:44:04.118756319 
+0000 UTC m=+1208.083967879" Mar 18 13:44:04.224839 master-0 kubenswrapper[27835]: I0318 13:44:04.224063 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:04.280629 master-0 kubenswrapper[27835]: I0318 13:44:04.273214 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:04.350255 master-0 kubenswrapper[27835]: I0318 13:44:04.349334 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a56300b8-65fa-4a52-8226-4a9e1cec54f7" path="/var/lib/kubelet/pods/a56300b8-65fa-4a52-8226-4a9e1cec54f7/volumes" Mar 18 13:44:04.350255 master-0 kubenswrapper[27835]: I0318 13:44:04.349987 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea50522-5362-473e-b6d2-999ce4e00950" path="/var/lib/kubelet/pods/eea50522-5362-473e-b6d2-999ce4e00950/volumes" Mar 18 13:44:04.395441 master-0 kubenswrapper[27835]: I0318 13:44:04.394609 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data" (OuterVolumeSpecName: "config-data") pod "f4adff55-e451-46fd-8e42-3aae24aa8baf" (UID: "f4adff55-e451-46fd-8e42-3aae24aa8baf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:04.447440 master-0 kubenswrapper[27835]: I0318 13:44:04.434632 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8645fd5fb8-gm6gg"] Mar 18 13:44:04.447440 master-0 kubenswrapper[27835]: I0318 13:44:04.443091 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.454606 master-0 kubenswrapper[27835]: I0318 13:44:04.451015 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8645fd5fb8-gm6gg"] Mar 18 13:44:04.491683 master-0 kubenswrapper[27835]: I0318 13:44:04.491626 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4adff55-e451-46fd-8e42-3aae24aa8baf-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:04.496738 master-0 kubenswrapper[27835]: I0318 13:44:04.495547 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-07518-scheduler-0" Mar 18 13:44:04.711909 master-0 kubenswrapper[27835]: I0318 13:44:04.711816 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-volume-lvm-iscsi-0"] Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.814785 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-public-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.814852 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w6t7\" (UniqueName: \"kubernetes.io/projected/a24d7c40-e543-4f45-81ab-2769f7077efb-kube-api-access-2w6t7\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.814893 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a24d7c40-e543-4f45-81ab-2769f7077efb-logs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.815083 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-config-data\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.815138 27835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-internal-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.815191 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-scripts\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.815868 master-0 kubenswrapper[27835]: I0318 13:44:04.815219 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-combined-ca-bundle\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.918739 master-0 kubenswrapper[27835]: I0318 13:44:04.918684 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-config-data\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: I0318 13:44:04.918770 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-internal-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 
kubenswrapper[27835]: I0318 13:44:04.918827 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-scripts\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: I0318 13:44:04.918854 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-combined-ca-bundle\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: I0318 13:44:04.918881 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-public-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: I0318 13:44:04.918906 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w6t7\" (UniqueName: \"kubernetes.io/projected/a24d7c40-e543-4f45-81ab-2769f7077efb-kube-api-access-2w6t7\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: I0318 13:44:04.918942 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a24d7c40-e543-4f45-81ab-2769f7077efb-logs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:04.919507 master-0 kubenswrapper[27835]: 
I0318 13:44:04.919430 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a24d7c40-e543-4f45-81ab-2769f7077efb-logs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:04.925866 master-0 kubenswrapper[27835]: I0318 13:44:04.925813 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-scripts\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:04.938737 master-0 kubenswrapper[27835]: I0318 13:44:04.938277 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-public-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:04.943127 master-0 kubenswrapper[27835]: I0318 13:44:04.943074 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-combined-ca-bundle\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:04.969624 master-0 kubenswrapper[27835]: I0318 13:44:04.969545 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-config-data\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:04.971993 master-0 kubenswrapper[27835]: I0318 13:44:04.971955 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w6t7\" (UniqueName: \"kubernetes.io/projected/a24d7c40-e543-4f45-81ab-2769f7077efb-kube-api-access-2w6t7\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:05.005570 master-0 kubenswrapper[27835]: I0318 13:44:05.004977 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a24d7c40-e543-4f45-81ab-2769f7077efb-internal-tls-certs\") pod \"placement-8645fd5fb8-gm6gg\" (UID: \"a24d7c40-e543-4f45-81ab-2769f7077efb\") " pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:05.075816 master-0 kubenswrapper[27835]: I0318 13:44:05.075503 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"223b1b5f-e043-4216-99da-3329720c45d7","Type":"ContainerStarted","Data":"7028a302e32629727293aee5fde8a85d47cc650f4c95cd9574f8a0fb4e397b2d"}
Mar 18 13:44:05.085874 master-0 kubenswrapper[27835]: I0318 13:44:05.085774 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"77055bbe4a09d76f58dd576c6942b559e38e1b98aee5e90f39a0b055c15af8a2"}
Mar 18 13:44:05.107893 master-0 kubenswrapper[27835]: I0318 13:44:05.106605 27835 generic.go:334] "Generic (PLEG): container finished" podID="f5e2d50f-621d-4ba6-9293-7c3d111e08dc" containerID="348d62f48a132c626487cbc1c2cd4311e27b136a076c26d4e6774cd9baf88793" exitCode=0
Mar 18 13:44:05.107893 master-0 kubenswrapper[27835]: I0318 13:44:05.106692 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-64fd4cfc77-nwzfl" event={"ID":"f5e2d50f-621d-4ba6-9293-7c3d111e08dc","Type":"ContainerDied","Data":"348d62f48a132c626487cbc1c2cd4311e27b136a076c26d4e6774cd9baf88793"}
Mar 18 13:44:05.116187 master-0 kubenswrapper[27835]: I0318 13:44:05.116119 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:44:05.132946 master-0 kubenswrapper[27835]: I0318 13:44:05.132888 27835 generic.go:334] "Generic (PLEG): container finished" podID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerID="0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1" exitCode=0
Mar 18 13:44:05.133632 master-0 kubenswrapper[27835]: I0318 13:44:05.133564 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerDied","Data":"0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1"}
Mar 18 13:44:05.185617 master-0 kubenswrapper[27835]: W0318 13:44:05.185557 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76243274_a619_4c3f_8b9c_19a8c89eb6f1.slice/crio-53a3be9f56d4999151746a2e00c39bc012f4fd0a39bba12857725909c76499a6 WatchSource:0}: Error finding container 53a3be9f56d4999151746a2e00c39bc012f4fd0a39bba12857725909c76499a6: Status 404 returned error can't find the container with id 53a3be9f56d4999151746a2e00c39bc012f4fd0a39bba12857725909c76499a6
Mar 18 13:44:05.185906 master-0 kubenswrapper[27835]: I0318 13:44:05.185865 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:05.192435 master-0 kubenswrapper[27835]: I0318 13:44:05.188594 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:44:05.232671 master-0 kubenswrapper[27835]: I0318 13:44:05.229502 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:44:05.238439 master-0 kubenswrapper[27835]: I0318 13:44:05.237073 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.245555 master-0 kubenswrapper[27835]: I0318 13:44:05.243776 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-07518-backup-config-data"
Mar 18 13:44:05.273834 master-0 kubenswrapper[27835]: I0318 13:44:05.272581 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-scheduler-0"]
Mar 18 13:44:05.330306 master-0 kubenswrapper[27835]: I0318 13:44:05.325360 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:44:05.343851 master-0 kubenswrapper[27835]: I0318 13:44:05.343797 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-dev\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.343851 master-0 kubenswrapper[27835]: I0318 13:44:05.343849 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.344107 master-0 kubenswrapper[27835]: I0318 13:44:05.343874 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.344107 master-0 kubenswrapper[27835]: I0318 13:44:05.343926 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.344227 master-0 kubenswrapper[27835]: I0318 13:44:05.344176 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344301 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-scripts\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344357 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-run\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344532 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344829 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-sys\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344933 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.344990 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.345028 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.345064 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.345111 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzz6k\" (UniqueName: \"kubernetes.io/projected/59594f38-4062-4f67-a913-2d334dba30c0-kube-api-access-kzz6k\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.347434 master-0 kubenswrapper[27835]: I0318 13:44:05.345150 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.451832 master-0 kubenswrapper[27835]: I0318 13:44:05.451769 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.451908 master-0 kubenswrapper[27835]: I0318 13:44:05.451845 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.451908 master-0 kubenswrapper[27835]: I0318 13:44:05.451871 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-scripts\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.451908 master-0 kubenswrapper[27835]: I0318 13:44:05.451899 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-run\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452010 master-0 kubenswrapper[27835]: I0318 13:44:05.451928 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452107 master-0 kubenswrapper[27835]: I0318 13:44:05.452060 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-sys\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452107 master-0 kubenswrapper[27835]: I0318 13:44:05.452095 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452185 master-0 kubenswrapper[27835]: I0318 13:44:05.452120 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452185 master-0 kubenswrapper[27835]: I0318 13:44:05.452140 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452185 master-0 kubenswrapper[27835]: I0318 13:44:05.452157 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452185 master-0 kubenswrapper[27835]: I0318 13:44:05.452177 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzz6k\" (UniqueName: \"kubernetes.io/projected/59594f38-4062-4f67-a913-2d334dba30c0-kube-api-access-kzz6k\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452379 master-0 kubenswrapper[27835]: I0318 13:44:05.452192 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452379 master-0 kubenswrapper[27835]: I0318 13:44:05.452259 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-dev\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452379 master-0 kubenswrapper[27835]: I0318 13:44:05.452276 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.452379 master-0 kubenswrapper[27835]: I0318 13:44:05.452296 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.453234 master-0 kubenswrapper[27835]: I0318 13:44:05.453206 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-lib-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.453294 master-0 kubenswrapper[27835]: I0318 13:44:05.453235 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-machine-id\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.453294 master-0 kubenswrapper[27835]: I0318 13:44:05.453262 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-nvme\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.453358 master-0 kubenswrapper[27835]: I0318 13:44:05.453298 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-brick\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.455305 master-0 kubenswrapper[27835]: I0318 13:44:05.454255 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-dev\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.455444 master-0 kubenswrapper[27835]: I0318 13:44:05.455366 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-lib-modules\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.455492 master-0 kubenswrapper[27835]: I0318 13:44:05.455478 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-var-locks-cinder\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.457098 master-0 kubenswrapper[27835]: I0318 13:44:05.457062 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-etc-iscsi\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.457200 master-0 kubenswrapper[27835]: I0318 13:44:05.457131 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-sys\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.457478 master-0 kubenswrapper[27835]: I0318 13:44:05.457433 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/59594f38-4062-4f67-a913-2d334dba30c0-run\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.461092 master-0 kubenswrapper[27835]: I0318 13:44:05.460309 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-combined-ca-bundle\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.463798 master-0 kubenswrapper[27835]: I0318 13:44:05.463758 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data-custom\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.475248 master-0 kubenswrapper[27835]: I0318 13:44:05.475205 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzz6k\" (UniqueName: \"kubernetes.io/projected/59594f38-4062-4f67-a913-2d334dba30c0-kube-api-access-kzz6k\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.483957 master-0 kubenswrapper[27835]: I0318 13:44:05.483904 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-scripts\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.535350 master-0 kubenswrapper[27835]: I0318 13:44:05.534499 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59594f38-4062-4f67-a913-2d334dba30c0-config-data\") pod \"cinder-07518-backup-0\" (UID: \"59594f38-4062-4f67-a913-2d334dba30c0\") " pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.654378 master-0 kubenswrapper[27835]: I0318 13:44:05.651314 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:05.928332 master-0 kubenswrapper[27835]: I0318 13:44:05.928285 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-07518-api-0"
Mar 18 13:44:06.163678 master-0 kubenswrapper[27835]: I0318 13:44:06.163622 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerStarted","Data":"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72"}
Mar 18 13:44:06.185796 master-0 kubenswrapper[27835]: I0318 13:44:06.185739 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8645fd5fb8-gm6gg"]
Mar 18 13:44:06.187608 master-0 kubenswrapper[27835]: I0318 13:44:06.187561 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"223b1b5f-e043-4216-99da-3329720c45d7","Type":"ContainerStarted","Data":"deeff6dc1e0a41172e8a19caee142dea3ebd311bf984737313bedcc36f401bdf"}
Mar 18 13:44:06.193257 master-0 kubenswrapper[27835]: I0318 13:44:06.193197 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"76243274-a619-4c3f-8b9c-19a8c89eb6f1","Type":"ContainerStarted","Data":"53a3be9f56d4999151746a2e00c39bc012f4fd0a39bba12857725909c76499a6"}
Mar 18 13:44:06.205950 master-0 kubenswrapper[27835]: I0318 13:44:06.205863 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-64fd4cfc77-nwzfl" event={"ID":"f5e2d50f-621d-4ba6-9293-7c3d111e08dc","Type":"ContainerStarted","Data":"b8b02cccc46861a6e156d92d5b37a2480556787adc5da48e30f9f4f8a925111a"}
Mar 18 13:44:06.218964 master-0 kubenswrapper[27835]: I0318 13:44:06.218910 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8645fd5fb8-gm6gg" event={"ID":"a24d7c40-e543-4f45-81ab-2769f7077efb","Type":"ContainerStarted","Data":"07488ee3583d14b55e5b7580122467c98721eb26eed6376997798347352c19a9"}
Mar 18 13:44:06.451907 master-0 kubenswrapper[27835]: I0318 13:44:06.442540 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4adff55-e451-46fd-8e42-3aae24aa8baf" path="/var/lib/kubelet/pods/f4adff55-e451-46fd-8e42-3aae24aa8baf/volumes"
Mar 18 13:44:06.611921 master-0 kubenswrapper[27835]: I0318 13:44:06.610941 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-07518-backup-0"]
Mar 18 13:44:07.269567 master-0 kubenswrapper[27835]: I0318 13:44:07.267798 27835 generic.go:334] "Generic (PLEG): container finished" podID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerID="2dab4872bc42731e2fa1c1936d57b2d76523a8f6799131eb819f053ba1fae172" exitCode=1
Mar 18 13:44:07.269567 master-0 kubenswrapper[27835]: I0318 13:44:07.268572 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerDied","Data":"2dab4872bc42731e2fa1c1936d57b2d76523a8f6799131eb819f053ba1fae172"}
Mar 18 13:44:07.271846 master-0 kubenswrapper[27835]: I0318 13:44:07.271705 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-volume-lvm-iscsi-0" event={"ID":"223b1b5f-e043-4216-99da-3329720c45d7","Type":"ContainerStarted","Data":"a731ccd039100997c047ed26eaa7ee5ec00343b69e7ec4160a88df28c044c3dd"}
Mar 18 13:44:07.275256 master-0 kubenswrapper[27835]: I0318 13:44:07.275199 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"59594f38-4062-4f67-a913-2d334dba30c0","Type":"ContainerStarted","Data":"49de9089a83d34f29f396785d0d3adaa310adf9583ddf6796f20f53d04aa4414"}
Mar 18 13:44:07.280726 master-0 kubenswrapper[27835]: I0318 13:44:07.280456 27835 scope.go:117] "RemoveContainer" containerID="2dab4872bc42731e2fa1c1936d57b2d76523a8f6799131eb819f053ba1fae172"
Mar 18 13:44:07.282315 master-0 kubenswrapper[27835]: I0318 13:44:07.282262 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"76243274-a619-4c3f-8b9c-19a8c89eb6f1","Type":"ContainerStarted","Data":"7c0407c719db14fb459313898108d1bb5a39e19ac2feb25a26bc88bc360c2fe6"}
Mar 18 13:44:07.307974 master-0 kubenswrapper[27835]: I0318 13:44:07.307923 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-58d6fd9d55-fhlft"
Mar 18 13:44:07.454794 master-0 kubenswrapper[27835]: I0318 13:44:07.453761 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-volume-lvm-iscsi-0" podStartSLOduration=4.453741651 podStartE2EDuration="4.453741651s" podCreationTimestamp="2026-03-18 13:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:07.427405152 +0000 UTC m=+1211.392616712" watchObservedRunningTime="2026-03-18 13:44:07.453741651 +0000 UTC m=+1211.418953211"
Mar 18 13:44:08.306917 master-0 kubenswrapper[27835]: I0318 13:44:08.306774 27835 generic.go:334] "Generic (PLEG): container finished" podID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerID="1bd2458b7437e971916523f8a3129ce49df3b5e69e5aff24455222f6316315ba" exitCode=1
Mar 18 13:44:08.311074 master-0 kubenswrapper[27835]: I0318 13:44:08.311030 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerDied","Data":"1bd2458b7437e971916523f8a3129ce49df3b5e69e5aff24455222f6316315ba"}
Mar 18 13:44:08.312583 master-0 kubenswrapper[27835]: I0318 13:44:08.312526 27835 scope.go:117] "RemoveContainer" containerID="1bd2458b7437e971916523f8a3129ce49df3b5e69e5aff24455222f6316315ba"
Mar 18 13:44:08.312794 master-0 kubenswrapper[27835]: I0318 13:44:08.312645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"59594f38-4062-4f67-a913-2d334dba30c0","Type":"ContainerStarted","Data":"50d9bd16d7e588ffaf58334142dd8928ea0b3d34608ce3a28892ab38c490ddcb"}
Mar 18 13:44:08.316175 master-0 kubenswrapper[27835]: I0318 13:44:08.316070 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2a793d4-62c6-4482-a5e5-21ed4cc72e33" containerID="77055bbe4a09d76f58dd576c6942b559e38e1b98aee5e90f39a0b055c15af8a2" exitCode=0
Mar 18 13:44:08.316175 master-0 kubenswrapper[27835]: I0318 13:44:08.316161 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerDied","Data":"77055bbe4a09d76f58dd576c6942b559e38e1b98aee5e90f39a0b055c15af8a2"}
Mar 18 13:44:08.323656 master-0 kubenswrapper[27835]: I0318 13:44:08.321323 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-64fd4cfc77-nwzfl" event={"ID":"f5e2d50f-621d-4ba6-9293-7c3d111e08dc","Type":"ContainerStarted","Data":"6e5116d56d041d4d0027cddd797e6890252348c07fbc58cae793209fbe3a9653"}
Mar 18 13:44:08.323656 master-0 kubenswrapper[27835]: I0318 13:44:08.322263 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-64fd4cfc77-nwzfl"
Mar 18 13:44:08.338596 master-0 kubenswrapper[27835]: I0318 13:44:08.338206 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8645fd5fb8-gm6gg" event={"ID":"a24d7c40-e543-4f45-81ab-2769f7077efb","Type":"ContainerStarted","Data":"be97cd50e777b8d400df9ca9731429349589050201c2ab9057e713b01ba8aff7"}
Mar 18 13:44:08.469466 master-0 kubenswrapper[27835]: I0318 13:44:08.467867 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-backup-0" podStartSLOduration=3.467822957 podStartE2EDuration="3.467822957s" podCreationTimestamp="2026-03-18 13:44:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:08.461799178 +0000 UTC m=+1212.427010738" watchObservedRunningTime="2026-03-18 13:44:08.467822957 +0000 UTC m=+1212.433034517"
Mar 18 13:44:08.471437 master-0 kubenswrapper[27835]: I0318 13:44:08.470248 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-64fd4cfc77-nwzfl" podStartSLOduration=9.470241282 podStartE2EDuration="9.470241282s" podCreationTimestamp="2026-03-18 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:08.438119399 +0000 UTC m=+1212.403330969" watchObservedRunningTime="2026-03-18 13:44:08.470241282 +0000 UTC m=+1212.435452842"
Mar 18 13:44:09.012738 master-0 kubenswrapper[27835]: I0318 13:44:09.012589 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:09.205747 master-0 kubenswrapper[27835]: I0318 13:44:09.202479 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 18 13:44:09.205747 master-0 kubenswrapper[27835]: I0318 13:44:09.204678 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 18 13:44:09.232478 master-0 kubenswrapper[27835]: I0318 13:44:09.232275 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 18 13:44:09.232478 master-0 kubenswrapper[27835]: I0318 13:44:09.232343 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 18 13:44:09.233553 master-0 kubenswrapper[27835]: I0318 13:44:09.233529 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 18 13:44:09.359516 master-0 kubenswrapper[27835]: I0318 13:44:09.357437 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8645fd5fb8-gm6gg" event={"ID":"a24d7c40-e543-4f45-81ab-2769f7077efb","Type":"ContainerStarted","Data":"7dcb03c70e60ddaa1ad5e60c034f69ed4b14db1f7071a3d7197ffd2638ba3c2f"}
Mar 18 13:44:09.359516 master-0 kubenswrapper[27835]: I0318 13:44:09.358703 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:09.359516 master-0 kubenswrapper[27835]: I0318 13:44:09.358731 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8645fd5fb8-gm6gg"
Mar 18 13:44:09.360589 master-0 kubenswrapper[27835]: I0318 13:44:09.360430 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerStarted","Data":"80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5"}
Mar 18 13:44:09.361194 master-0 kubenswrapper[27835]: I0318 13:44:09.361164 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9"
Mar 18 13:44:09.373624 master-0 kubenswrapper[27835]: I0318 13:44:09.365507 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.373624 master-0 kubenswrapper[27835]: I0318 13:44:09.365615 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.373624 master-0 kubenswrapper[27835]: I0318 13:44:09.366027 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.373624 master-0 kubenswrapper[27835]: I0318 13:44:09.366082 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ct9\" (UniqueName: \"kubernetes.io/projected/c7433697-622c-4928-be5c-cd3a3c65cc8c-kube-api-access-z4ct9\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.407244 master-0 kubenswrapper[27835]: I0318 13:44:09.404674 27835 generic.go:334] "Generic (PLEG): container finished" podID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" exitCode=1
Mar 18 13:44:09.407244 master-0 kubenswrapper[27835]: I0318 13:44:09.404781 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerDied","Data":"705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8"}
Mar 18 13:44:09.407244 master-0 kubenswrapper[27835]: I0318 13:44:09.404814 27835 scope.go:117] "RemoveContainer" containerID="2dab4872bc42731e2fa1c1936d57b2d76523a8f6799131eb819f053ba1fae172"
Mar 18 13:44:09.407244 master-0 kubenswrapper[27835]: I0318 13:44:09.405640 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8"
Mar 18 13:44:09.407244 master-0 kubenswrapper[27835]: E0318 13:44:09.405883 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-857bc4bdd5-pjcbx_openstack(ca0a3287-6ed1-4a7e-a07a-80284820fbc3)\"" pod="openstack/ironic-857bc4bdd5-pjcbx" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3"
Mar 18 13:44:09.465487 master-0 kubenswrapper[27835]: I0318 13:44:09.464458 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8645fd5fb8-gm6gg" podStartSLOduration=5.46443513 podStartE2EDuration="5.46443513s" podCreationTimestamp="2026-03-18 13:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:09.384609104 +0000 UTC m=+1213.349820664" watchObservedRunningTime="2026-03-18 13:44:09.46443513 +0000 UTC m=+1213.429646700"
Mar 18 13:44:09.470385 master-0 kubenswrapper[27835]: I0318 13:44:09.469894 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.470385 master-0 kubenswrapper[27835]: I0318 13:44:09.469957 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ct9\" (UniqueName: \"kubernetes.io/projected/c7433697-622c-4928-be5c-cd3a3c65cc8c-kube-api-access-z4ct9\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.470385 master-0 kubenswrapper[27835]: I0318 13:44:09.470030 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.470385 master-0 kubenswrapper[27835]: I0318 13:44:09.470053 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.484138 master-0 kubenswrapper[27835]: I0318 13:44:09.472116 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient"
Mar 18 13:44:09.486129 master-0 kubenswrapper[27835]: I0318 13:44:09.484726 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-backup-0" event={"ID":"59594f38-4062-4f67-a913-2d334dba30c0","Type":"ContainerStarted","Data":"9756672f815113ac3c34fc5399981fb7fc82d8620b3d6f62553c0f8d8bef9c26"}
Mar 18 13:44:09.486129 master-0 kubenswrapper[27835]: I0318 13:44:09.484915 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName:
\"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-openstack-config-secret\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient" Mar 18 13:44:09.502466 master-0 kubenswrapper[27835]: I0318 13:44:09.502354 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-07518-scheduler-0" event={"ID":"76243274-a619-4c3f-8b9c-19a8c89eb6f1","Type":"ContainerStarted","Data":"fef373a67544d2355d0abda2fbee2b77356d5f160fd60724a89e2cc1feff72ab"} Mar 18 13:44:09.513316 master-0 kubenswrapper[27835]: I0318 13:44:09.513260 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ct9\" (UniqueName: \"kubernetes.io/projected/c7433697-622c-4928-be5c-cd3a3c65cc8c-kube-api-access-z4ct9\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient" Mar 18 13:44:09.540749 master-0 kubenswrapper[27835]: I0318 13:44:09.532495 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7433697-622c-4928-be5c-cd3a3c65cc8c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c7433697-622c-4928-be5c-cd3a3c65cc8c\") " pod="openstack/openstackclient" Mar 18 13:44:09.569214 master-0 kubenswrapper[27835]: I0318 13:44:09.567884 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 13:44:09.601428 master-0 kubenswrapper[27835]: I0318 13:44:09.601339 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-07518-scheduler-0" podStartSLOduration=6.601311309 podStartE2EDuration="6.601311309s" podCreationTimestamp="2026-03-18 13:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:09.546001593 +0000 UTC m=+1213.511213153" watchObservedRunningTime="2026-03-18 13:44:09.601311309 +0000 UTC m=+1213.566522879" Mar 18 13:44:10.140734 master-0 kubenswrapper[27835]: I0318 13:44:10.140567 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-s7cvj"] Mar 18 13:44:10.142258 master-0 kubenswrapper[27835]: I0318 13:44:10.142232 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.154227 master-0 kubenswrapper[27835]: I0318 13:44:10.154068 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 18 13:44:10.156403 master-0 kubenswrapper[27835]: I0318 13:44:10.154650 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 18 13:44:10.173138 master-0 kubenswrapper[27835]: I0318 13:44:10.173068 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.173548 master-0 kubenswrapper[27835]: I0318 13:44:10.173408 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvjk\" 
(UniqueName: \"kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.173872 master-0 kubenswrapper[27835]: I0318 13:44:10.173789 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.173872 master-0 kubenswrapper[27835]: I0318 13:44:10.173831 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.174495 master-0 kubenswrapper[27835]: I0318 13:44:10.173966 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.174495 master-0 kubenswrapper[27835]: I0318 13:44:10.174447 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.174495 
master-0 kubenswrapper[27835]: I0318 13:44:10.174472 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.331227 master-0 kubenswrapper[27835]: I0318 13:44:10.330957 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-s7cvj"] Mar 18 13:44:10.337486 master-0 kubenswrapper[27835]: I0318 13:44:10.337439 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqvjk\" (UniqueName: \"kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.337609 master-0 kubenswrapper[27835]: I0318 13:44:10.337523 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.337609 master-0 kubenswrapper[27835]: I0318 13:44:10.337564 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.338136 master-0 kubenswrapper[27835]: I0318 13:44:10.337748 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.338136 master-0 kubenswrapper[27835]: I0318 13:44:10.337779 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.338136 master-0 kubenswrapper[27835]: I0318 13:44:10.337805 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.339278 master-0 kubenswrapper[27835]: I0318 13:44:10.338596 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.345077 master-0 kubenswrapper[27835]: I0318 13:44:10.344584 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.358552 master-0 kubenswrapper[27835]: I0318 
13:44:10.358497 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.363108 master-0 kubenswrapper[27835]: I0318 13:44:10.363043 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.367457 master-0 kubenswrapper[27835]: I0318 13:44:10.365246 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.368508 master-0 kubenswrapper[27835]: I0318 13:44:10.368374 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.404638 master-0 kubenswrapper[27835]: I0318 13:44:10.403275 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqvjk\" (UniqueName: \"kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.427925 master-0 kubenswrapper[27835]: I0318 13:44:10.427800 27835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo\") pod \"ironic-inspector-db-sync-s7cvj\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") " pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.501588 master-0 kubenswrapper[27835]: I0318 13:44:10.500638 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 13:44:10.557038 master-0 kubenswrapper[27835]: I0318 13:44:10.556841 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c7433697-622c-4928-be5c-cd3a3c65cc8c","Type":"ContainerStarted","Data":"d73a3660908e22e30ac852b56288f3d11edcea504ad7ba95fec9548446f89a4c"} Mar 18 13:44:10.566400 master-0 kubenswrapper[27835]: I0318 13:44:10.565887 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:44:10.593618 master-0 kubenswrapper[27835]: I0318 13:44:10.593092 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" Mar 18 13:44:10.593618 master-0 kubenswrapper[27835]: E0318 13:44:10.593345 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-857bc4bdd5-pjcbx_openstack(ca0a3287-6ed1-4a7e-a07a-80284820fbc3)\"" pod="openstack/ironic-857bc4bdd5-pjcbx" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" Mar 18 13:44:10.616359 master-0 kubenswrapper[27835]: I0318 13:44:10.616039 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:10.653936 master-0 kubenswrapper[27835]: I0318 13:44:10.653579 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-backup-0" Mar 18 13:44:11.122611 master-0 kubenswrapper[27835]: I0318 13:44:11.115753 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" Mar 18 13:44:11.213663 master-0 kubenswrapper[27835]: I0318 13:44:11.208907 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-857bc4bdd5-pjcbx" Mar 18 13:44:11.213663 master-0 kubenswrapper[27835]: I0318 13:44:11.208960 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-857bc4bdd5-pjcbx" Mar 18 13:44:11.269341 master-0 kubenswrapper[27835]: I0318 13:44:11.269239 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:44:11.269570 master-0 kubenswrapper[27835]: I0318 13:44:11.269490 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="dnsmasq-dns" containerID="cri-o://61fcf7fd30d8e17a9af269aeed2b7b7d6ae9eb1b61e05c0e88026a4a3f5da3fc" gracePeriod=10 Mar 18 13:44:11.409267 master-0 kubenswrapper[27835]: I0318 13:44:11.408602 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-s7cvj"] Mar 18 13:44:11.451564 master-0 kubenswrapper[27835]: W0318 13:44:11.449091 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1fc873e_3d35_4632_a144_08e9b6e74e02.slice/crio-3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047 WatchSource:0}: Error finding container 
3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047: Status 404 returned error can't find the container with id 3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047 Mar 18 13:44:11.646282 master-0 kubenswrapper[27835]: I0318 13:44:11.646131 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-s7cvj" event={"ID":"c1fc873e-3d35-4632-a144-08e9b6e74e02","Type":"ContainerStarted","Data":"3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047"} Mar 18 13:44:11.662854 master-0 kubenswrapper[27835]: I0318 13:44:11.661480 27835 generic.go:334] "Generic (PLEG): container finished" podID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerID="61fcf7fd30d8e17a9af269aeed2b7b7d6ae9eb1b61e05c0e88026a4a3f5da3fc" exitCode=0 Mar 18 13:44:11.662854 master-0 kubenswrapper[27835]: I0318 13:44:11.662623 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerDied","Data":"61fcf7fd30d8e17a9af269aeed2b7b7d6ae9eb1b61e05c0e88026a4a3f5da3fc"} Mar 18 13:44:11.664380 master-0 kubenswrapper[27835]: I0318 13:44:11.664244 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" Mar 18 13:44:11.664971 master-0 kubenswrapper[27835]: E0318 13:44:11.664702 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-857bc4bdd5-pjcbx_openstack(ca0a3287-6ed1-4a7e-a07a-80284820fbc3)\"" pod="openstack/ironic-857bc4bdd5-pjcbx" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" Mar 18 13:44:12.294180 master-0 kubenswrapper[27835]: I0318 13:44:12.294093 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:44:12.499325 master-0 kubenswrapper[27835]: I0318 13:44:12.499179 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.500056 master-0 kubenswrapper[27835]: I0318 13:44:12.500031 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.500244 master-0 kubenswrapper[27835]: I0318 13:44:12.500228 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.500400 master-0 kubenswrapper[27835]: I0318 13:44:12.500387 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69cmv\" (UniqueName: \"kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.500570 master-0 kubenswrapper[27835]: I0318 13:44:12.500557 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.500732 master-0 kubenswrapper[27835]: I0318 13:44:12.500717 27835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc\") pod \"784c7158-5c02-4b97-bf4b-11241e7ebc40\" (UID: \"784c7158-5c02-4b97-bf4b-11241e7ebc40\") " Mar 18 13:44:12.528936 master-0 kubenswrapper[27835]: I0318 13:44:12.527743 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv" (OuterVolumeSpecName: "kube-api-access-69cmv") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "kube-api-access-69cmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:12.604680 master-0 kubenswrapper[27835]: I0318 13:44:12.604624 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69cmv\" (UniqueName: \"kubernetes.io/projected/784c7158-5c02-4b97-bf4b-11241e7ebc40-kube-api-access-69cmv\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.635335 master-0 kubenswrapper[27835]: I0318 13:44:12.635276 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:12.659546 master-0 kubenswrapper[27835]: I0318 13:44:12.659492 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:12.679974 master-0 kubenswrapper[27835]: I0318 13:44:12.679873 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:12.681399 master-0 kubenswrapper[27835]: I0318 13:44:12.681356 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" Mar 18 13:44:12.685441 master-0 kubenswrapper[27835]: E0318 13:44:12.681668 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-857bc4bdd5-pjcbx_openstack(ca0a3287-6ed1-4a7e-a07a-80284820fbc3)\"" pod="openstack/ironic-857bc4bdd5-pjcbx" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" Mar 18 13:44:12.685441 master-0 kubenswrapper[27835]: I0318 13:44:12.682055 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" Mar 18 13:44:12.685441 master-0 kubenswrapper[27835]: I0318 13:44:12.682614 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b7d9bdfc-bnzwf" event={"ID":"784c7158-5c02-4b97-bf4b-11241e7ebc40","Type":"ContainerDied","Data":"8b95b1872b02020d195b68ed5a773100977db05c120e249340f92a3b65c6c4af"} Mar 18 13:44:12.685441 master-0 kubenswrapper[27835]: I0318 13:44:12.682650 27835 scope.go:117] "RemoveContainer" containerID="61fcf7fd30d8e17a9af269aeed2b7b7d6ae9eb1b61e05c0e88026a4a3f5da3fc" Mar 18 13:44:12.711373 master-0 kubenswrapper[27835]: I0318 13:44:12.709831 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.711373 master-0 kubenswrapper[27835]: I0318 13:44:12.709872 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.711373 master-0 kubenswrapper[27835]: I0318 13:44:12.709886 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.717326 master-0 kubenswrapper[27835]: I0318 13:44:12.717252 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:12.739453 master-0 kubenswrapper[27835]: I0318 13:44:12.738329 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config" (OuterVolumeSpecName: "config") pod "784c7158-5c02-4b97-bf4b-11241e7ebc40" (UID: "784c7158-5c02-4b97-bf4b-11241e7ebc40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:12.816708 master-0 kubenswrapper[27835]: I0318 13:44:12.814586 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.816708 master-0 kubenswrapper[27835]: I0318 13:44:12.814630 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/784c7158-5c02-4b97-bf4b-11241e7ebc40-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:12.816708 master-0 kubenswrapper[27835]: I0318 13:44:12.814693 27835 scope.go:117] "RemoveContainer" containerID="518954abcd4ba774aae84929c5c606de15aafa5ec705809dc6fc2a486dcfd130" Mar 18 13:44:13.055508 master-0 kubenswrapper[27835]: I0318 13:44:13.052851 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:44:13.068145 master-0 kubenswrapper[27835]: I0318 13:44:13.067814 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b7d9bdfc-bnzwf"] Mar 18 13:44:13.657097 master-0 kubenswrapper[27835]: I0318 13:44:13.656943 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-64fd4cfc77-nwzfl" Mar 18 13:44:13.884087 master-0 kubenswrapper[27835]: I0318 13:44:13.884021 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"] Mar 18 13:44:13.902468 master-0 
kubenswrapper[27835]: I0318 13:44:13.899839 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-857bc4bdd5-pjcbx" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api-log" containerID="cri-o://58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72" gracePeriod=60
Mar 18 13:44:14.308472 master-0 kubenswrapper[27835]: I0318 13:44:14.307683 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" path="/var/lib/kubelet/pods/784c7158-5c02-4b97-bf4b-11241e7ebc40/volumes"
Mar 18 13:44:14.367164 master-0 kubenswrapper[27835]: I0318 13:44:14.367116 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-volume-lvm-iscsi-0"
Mar 18 13:44:14.500486 master-0 kubenswrapper[27835]: I0318 13:44:14.496096 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:44:14.622225 master-0 kubenswrapper[27835]: I0318 13:44:14.622096 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:44:14.746321 master-0 kubenswrapper[27835]: I0318 13:44:14.746113 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746321 master-0 kubenswrapper[27835]: I0318 13:44:14.746306 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746389 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746522 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746543 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746689 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746706 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.746942 master-0 kubenswrapper[27835]: I0318 13:44:14.746832 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrdz6\" (UniqueName: \"kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6\") pod \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\" (UID: \"ca0a3287-6ed1-4a7e-a07a-80284820fbc3\") "
Mar 18 13:44:14.760672 master-0 kubenswrapper[27835]: I0318 13:44:14.749599 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts" (OuterVolumeSpecName: "scripts") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:14.760672 master-0 kubenswrapper[27835]: I0318 13:44:14.750053 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:14.760672 master-0 kubenswrapper[27835]: I0318 13:44:14.751957 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs" (OuterVolumeSpecName: "logs") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:14.760672 master-0 kubenswrapper[27835]: I0318 13:44:14.756768 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 18 13:44:14.760672 master-0 kubenswrapper[27835]: I0318 13:44:14.756925 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:14.767578 master-0 kubenswrapper[27835]: I0318 13:44:14.761986 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6" (OuterVolumeSpecName: "kube-api-access-qrdz6") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "kube-api-access-qrdz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:14.795538 master-0 kubenswrapper[27835]: I0318 13:44:14.794080 27835 generic.go:334] "Generic (PLEG): container finished" podID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerID="58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72" exitCode=143
Mar 18 13:44:14.795538 master-0 kubenswrapper[27835]: I0318 13:44:14.794141 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerDied","Data":"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72"}
Mar 18 13:44:14.795538 master-0 kubenswrapper[27835]: I0318 13:44:14.794165 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-857bc4bdd5-pjcbx"
Mar 18 13:44:14.795538 master-0 kubenswrapper[27835]: I0318 13:44:14.794169 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-857bc4bdd5-pjcbx" event={"ID":"ca0a3287-6ed1-4a7e-a07a-80284820fbc3","Type":"ContainerDied","Data":"a3d8568801f12916302771df0c3d0b0bd3e991a53157ff1307b76c4d24a6897d"}
Mar 18 13:44:14.795538 master-0 kubenswrapper[27835]: I0318 13:44:14.794182 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8"
Mar 18 13:44:14.821501 master-0 kubenswrapper[27835]: I0318 13:44:14.816487 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data" (OuterVolumeSpecName: "config-data") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:14.843156 master-0 kubenswrapper[27835]: I0318 13:44:14.836744 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca0a3287-6ed1-4a7e-a07a-80284820fbc3" (UID: "ca0a3287-6ed1-4a7e-a07a-80284820fbc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850646 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850690 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850700 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrdz6\" (UniqueName: \"kubernetes.io/projected/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-kube-api-access-qrdz6\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850712 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850720 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850733 27835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.850800 master-0 kubenswrapper[27835]: I0318 13:44:14.850747 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.851100 master-0 kubenswrapper[27835]: I0318 13:44:14.851045 27835 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca0a3287-6ed1-4a7e-a07a-80284820fbc3-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:14.885292 master-0 kubenswrapper[27835]: I0318 13:44:14.883647 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-scheduler-0"
Mar 18 13:44:15.183269 master-0 kubenswrapper[27835]: I0318 13:44:15.183190 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"]
Mar 18 13:44:15.208549 master-0 kubenswrapper[27835]: I0318 13:44:15.206651 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-857bc4bdd5-pjcbx"]
Mar 18 13:44:15.361663 master-0 kubenswrapper[27835]: E0318 13:44:15.361391 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.361663 master-0 kubenswrapper[27835]: E0318 13:44:15.361391 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.362799 master-0 kubenswrapper[27835]: E0318 13:44:15.362696 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.362799 master-0 kubenswrapper[27835]: E0318 13:44:15.362776 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.363615 master-0 kubenswrapper[27835]: E0318 13:44:15.363444 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.363615 master-0 kubenswrapper[27835]: E0318 13:44:15.363511 27835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerName="ironic-neutron-agent"
Mar 18 13:44:15.363752 master-0 kubenswrapper[27835]: E0318 13:44:15.363581 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" cmd=["/bin/true"]
Mar 18 13:44:15.363752 master-0 kubenswrapper[27835]: E0318 13:44:15.363671 27835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerName="ironic-neutron-agent"
Mar 18 13:44:15.636227 master-0 kubenswrapper[27835]: I0318 13:44:15.636115 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6bf5c56f77-ccf49"
Mar 18 13:44:15.763196 master-0 kubenswrapper[27835]: I0318 13:44:15.761540 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"]
Mar 18 13:44:15.763196 master-0 kubenswrapper[27835]: I0318 13:44:15.762125 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d475bdf48-lskhl" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-api" containerID="cri-o://26067cdf644c0ff8b6005aa4255c18ea6a5029445373767e9f274f38676036a1" gracePeriod=30
Mar 18 13:44:15.763196 master-0 kubenswrapper[27835]: I0318 13:44:15.762752 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d475bdf48-lskhl" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-httpd" containerID="cri-o://e862540dc618ae0a90bacb8bc47bc230cb6d3a28836d17d998ad9600dce1898f" gracePeriod=30
Mar 18 13:44:16.116454 master-0 kubenswrapper[27835]: I0318 13:44:16.116331 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-07518-backup-0"
Mar 18 13:44:16.312380 master-0 kubenswrapper[27835]: I0318 13:44:16.312307 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" path="/var/lib/kubelet/pods/ca0a3287-6ed1-4a7e-a07a-80284820fbc3/volumes"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.685658 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f6cbd65db-gzvz8"]
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: E0318 13:44:17.686227 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api-log"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686252 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api-log"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: E0318 13:44:17.686281 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="dnsmasq-dns"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686287 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="dnsmasq-dns"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: E0318 13:44:17.686309 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686315 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: E0318 13:44:17.686328 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="init"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686334 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="init"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: E0318 13:44:17.686352 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="init"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686359 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="init"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686602 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686633 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686672 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="784c7158-5c02-4b97-bf4b-11241e7ebc40" containerName="dnsmasq-dns"
Mar 18 13:44:17.687266 master-0 kubenswrapper[27835]: I0318 13:44:17.686684 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api-log"
Mar 18 13:44:17.688476 master-0 kubenswrapper[27835]: E0318 13:44:17.687908 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.688476 master-0 kubenswrapper[27835]: I0318 13:44:17.688067 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0a3287-6ed1-4a7e-a07a-80284820fbc3" containerName="ironic-api"
Mar 18 13:44:17.692181 master-0 kubenswrapper[27835]: I0318 13:44:17.691862 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.697610 master-0 kubenswrapper[27835]: I0318 13:44:17.694654 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Mar 18 13:44:17.697610 master-0 kubenswrapper[27835]: I0318 13:44:17.694971 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 18 13:44:17.697610 master-0 kubenswrapper[27835]: I0318 13:44:17.695099 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Mar 18 13:44:17.726712 master-0 kubenswrapper[27835]: I0318 13:44:17.726649 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f6cbd65db-gzvz8"]
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.802358 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-public-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.802544 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-log-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.802626 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-combined-ca-bundle\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.802819 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-internal-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.803147 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npf85\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-kube-api-access-npf85\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.803251 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-run-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.803785 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-config-data\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.804874 master-0 kubenswrapper[27835]: I0318 13:44:17.803898 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-etc-swift\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.912472 master-0 kubenswrapper[27835]: I0318 13:44:17.912406 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-etc-swift\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912599 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-public-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912657 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-log-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912689 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-combined-ca-bundle\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912796 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-internal-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912817 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npf85\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-kube-api-access-npf85\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912869 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-run-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914621 master-0 kubenswrapper[27835]: I0318 13:44:17.912953 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-config-data\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.914919 master-0 kubenswrapper[27835]: I0318 13:44:17.914850 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-run-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.927748 master-0 kubenswrapper[27835]: I0318 13:44:17.916298 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8db812c3-c391-4147-8220-fdd68cdd11d3-log-httpd\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.927748 master-0 kubenswrapper[27835]: I0318 13:44:17.923638 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-combined-ca-bundle\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.927748 master-0 kubenswrapper[27835]: I0318 13:44:17.924473 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-etc-swift\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.938094 master-0 kubenswrapper[27835]: I0318 13:44:17.936069 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-config-data\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.938869 master-0 kubenswrapper[27835]: I0318 13:44:17.938835 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-public-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.949493 master-0 kubenswrapper[27835]: I0318 13:44:17.944882 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gwfch"]
Mar 18 13:44:17.949493 master-0 kubenswrapper[27835]: I0318 13:44:17.948771 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:17.949493 master-0 kubenswrapper[27835]: I0318 13:44:17.949195 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8db812c3-c391-4147-8220-fdd68cdd11d3-internal-tls-certs\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.958178 master-0 kubenswrapper[27835]: I0318 13:44:17.956481 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npf85\" (UniqueName: \"kubernetes.io/projected/8db812c3-c391-4147-8220-fdd68cdd11d3-kube-api-access-npf85\") pod \"swift-proxy-6f6cbd65db-gzvz8\" (UID: \"8db812c3-c391-4147-8220-fdd68cdd11d3\") " pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:17.982709 master-0 kubenswrapper[27835]: I0318 13:44:17.967784 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gwfch"]
Mar 18 13:44:18.035456 master-0 kubenswrapper[27835]: I0318 13:44:18.034383 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:18.035456 master-0 kubenswrapper[27835]: I0318 13:44:18.034554 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztkmx\" (UniqueName: \"kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:18.095668 master-0 kubenswrapper[27835]: I0318 13:44:18.095406 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f6cbd65db-gzvz8"
Mar 18 13:44:18.107527 master-0 kubenswrapper[27835]: I0318 13:44:18.103507 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-g2nft"]
Mar 18 13:44:18.127585 master-0 kubenswrapper[27835]: I0318 13:44:18.125159 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.169875 master-0 kubenswrapper[27835]: I0318 13:44:18.144108 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztkmx\" (UniqueName: \"kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:18.169875 master-0 kubenswrapper[27835]: I0318 13:44:18.144426 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:18.169875 master-0 kubenswrapper[27835]: I0318 13:44:18.149568 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ff10-account-create-update-49jqz"]
Mar 18 13:44:18.169875 master-0 kubenswrapper[27835]: I0318 13:44:18.151144 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.169875 master-0 kubenswrapper[27835]: I0318 13:44:18.160155 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:18.179564 master-0 kubenswrapper[27835]: I0318 13:44:18.175789 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 18 13:44:18.229563 master-0 kubenswrapper[27835]: I0318 13:44:18.218835 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-g2nft"]
Mar 18 13:44:18.259614 master-0 kubenswrapper[27835]: I0318 13:44:18.255206 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4tj\" (UniqueName: \"kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.259614 master-0 kubenswrapper[27835]: I0318 13:44:18.255344 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvz9\" (UniqueName: \"kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.259614 master-0 kubenswrapper[27835]: I0318 13:44:18.255427 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.259614 master-0 kubenswrapper[27835]: I0318 13:44:18.255512 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.366932 master-0 kubenswrapper[27835]: I0318 13:44:18.366037 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4tj\" (UniqueName: \"kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.366932 master-0 kubenswrapper[27835]: I0318 13:44:18.366274 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvz9\" (UniqueName: \"kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.366932 master-0 kubenswrapper[27835]: I0318 13:44:18.366492 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.366932 master-0 kubenswrapper[27835]: I0318 13:44:18.366674 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.372964 master-0 kubenswrapper[27835]: I0318 13:44:18.372922 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:18.380067 master-0 kubenswrapper[27835]: I0318 13:44:18.379997 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:18.914894 master-0 kubenswrapper[27835]: I0318 13:44:18.914625 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ff10-account-create-update-49jqz"]
Mar 18 13:44:19.020279 master-0 kubenswrapper[27835]: I0318 13:44:19.019657 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4tj\" (UniqueName: \"kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj\") pod \"nova-api-ff10-account-create-update-49jqz\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") " pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:19.030321 master-0 kubenswrapper[27835]: I0318 13:44:19.029902 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztkmx\"
(UniqueName: \"kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx\") pod \"nova-api-db-create-gwfch\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") " pod="openstack/nova-api-db-create-gwfch" Mar 18 13:44:19.048548 master-0 kubenswrapper[27835]: I0318 13:44:19.046999 27835 scope.go:117] "RemoveContainer" containerID="58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72" Mar 18 13:44:19.055595 master-0 kubenswrapper[27835]: I0318 13:44:19.051214 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvz9\" (UniqueName: \"kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9\") pod \"nova-cell0-db-create-g2nft\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") " pod="openstack/nova-cell0-db-create-g2nft" Mar 18 13:44:19.104307 master-0 kubenswrapper[27835]: I0318 13:44:19.098051 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g2nft" Mar 18 13:44:19.132689 master-0 kubenswrapper[27835]: I0318 13:44:19.132633 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-63a9-account-create-update-dbfqn"] Mar 18 13:44:19.135340 master-0 kubenswrapper[27835]: I0318 13:44:19.135288 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.138613 master-0 kubenswrapper[27835]: I0318 13:44:19.138569 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ff10-account-create-update-49jqz" Mar 18 13:44:19.144902 master-0 kubenswrapper[27835]: I0318 13:44:19.144833 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 18 13:44:19.171228 master-0 kubenswrapper[27835]: I0318 13:44:19.171171 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-63a9-account-create-update-dbfqn"] Mar 18 13:44:19.191366 master-0 kubenswrapper[27835]: I0318 13:44:19.191246 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gr5q\" (UniqueName: \"kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.192044 master-0 kubenswrapper[27835]: I0318 13:44:19.192010 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.288808 master-0 kubenswrapper[27835]: I0318 13:44:19.270617 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-g9ddk"] Mar 18 13:44:19.288808 master-0 kubenswrapper[27835]: I0318 13:44:19.272442 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.290768 master-0 kubenswrapper[27835]: I0318 13:44:19.289829 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gwfch" Mar 18 13:44:19.290768 master-0 kubenswrapper[27835]: I0318 13:44:19.290254 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g9ddk"] Mar 18 13:44:19.305230 master-0 kubenswrapper[27835]: I0318 13:44:19.298136 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.305230 master-0 kubenswrapper[27835]: I0318 13:44:19.298680 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gr5q\" (UniqueName: \"kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.305230 master-0 kubenswrapper[27835]: I0318 13:44:19.298786 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlr6p\" (UniqueName: \"kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.305230 master-0 kubenswrapper[27835]: I0318 13:44:19.298835 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 
13:44:19.305230 master-0 kubenswrapper[27835]: I0318 13:44:19.299924 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.307146 master-0 kubenswrapper[27835]: I0318 13:44:19.306828 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ea54-account-create-update-85drm"] Mar 18 13:44:19.311640 master-0 kubenswrapper[27835]: I0318 13:44:19.311611 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.314744 master-0 kubenswrapper[27835]: I0318 13:44:19.314622 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 18 13:44:19.332669 master-0 kubenswrapper[27835]: I0318 13:44:19.324727 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ea54-account-create-update-85drm"] Mar 18 13:44:19.374798 master-0 kubenswrapper[27835]: I0318 13:44:19.373966 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gr5q\" (UniqueName: \"kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q\") pod \"nova-cell0-63a9-account-create-update-dbfqn\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.417526 master-0 kubenswrapper[27835]: I0318 13:44:19.415714 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l762l\" (UniqueName: \"kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: 
\"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.417526 master-0 kubenswrapper[27835]: I0318 13:44:19.415914 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlr6p\" (UniqueName: \"kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.417526 master-0 kubenswrapper[27835]: I0318 13:44:19.416365 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.417526 master-0 kubenswrapper[27835]: I0318 13:44:19.416399 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.424675 master-0 kubenswrapper[27835]: I0318 13:44:19.423728 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.466262 master-0 kubenswrapper[27835]: I0318 13:44:19.466209 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlr6p\" (UniqueName: 
\"kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p\") pod \"nova-cell1-db-create-g9ddk\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") " pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.518614 master-0 kubenswrapper[27835]: I0318 13:44:19.518570 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.518878 master-0 kubenswrapper[27835]: I0318 13:44:19.518864 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l762l\" (UniqueName: \"kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.522640 master-0 kubenswrapper[27835]: I0318 13:44:19.522598 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.555554 master-0 kubenswrapper[27835]: I0318 13:44:19.555514 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l762l\" (UniqueName: \"kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l\") pod \"nova-cell1-ea54-account-create-update-85drm\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") " pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.575542 master-0 
kubenswrapper[27835]: I0318 13:44:19.575441 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:19.634033 master-0 kubenswrapper[27835]: I0318 13:44:19.615726 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:19.709538 master-0 kubenswrapper[27835]: I0318 13:44:19.709495 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:19.953628 master-0 kubenswrapper[27835]: I0318 13:44:19.953574 27835 generic.go:334] "Generic (PLEG): container finished" podID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerID="e862540dc618ae0a90bacb8bc47bc230cb6d3a28836d17d998ad9600dce1898f" exitCode=0 Mar 18 13:44:19.953628 master-0 kubenswrapper[27835]: I0318 13:44:19.953614 27835 generic.go:334] "Generic (PLEG): container finished" podID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerID="26067cdf644c0ff8b6005aa4255c18ea6a5029445373767e9f274f38676036a1" exitCode=0 Mar 18 13:44:19.954431 master-0 kubenswrapper[27835]: I0318 13:44:19.953693 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerDied","Data":"e862540dc618ae0a90bacb8bc47bc230cb6d3a28836d17d998ad9600dce1898f"} Mar 18 13:44:19.954431 master-0 kubenswrapper[27835]: I0318 13:44:19.953723 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerDied","Data":"26067cdf644c0ff8b6005aa4255c18ea6a5029445373767e9f274f38676036a1"} Mar 18 13:44:19.959393 master-0 kubenswrapper[27835]: I0318 13:44:19.959052 27835 generic.go:334] "Generic (PLEG): container finished" podID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" 
containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" exitCode=1 Mar 18 13:44:19.959393 master-0 kubenswrapper[27835]: I0318 13:44:19.959134 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerDied","Data":"80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5"} Mar 18 13:44:19.960218 master-0 kubenswrapper[27835]: I0318 13:44:19.960121 27835 scope.go:117] "RemoveContainer" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" Mar 18 13:44:19.962236 master-0 kubenswrapper[27835]: E0318 13:44:19.960447 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-689c666fd-tjnb9_openstack(cc7df07d-4c6b-469f-b007-e3d799a49fd5)\"" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" Mar 18 13:44:20.350522 master-0 kubenswrapper[27835]: I0318 13:44:20.350110 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:44:20.563445 master-0 kubenswrapper[27835]: I0318 13:44:20.562341 27835 scope.go:117] "RemoveContainer" containerID="0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1" Mar 18 13:44:20.776980 master-0 kubenswrapper[27835]: I0318 13:44:20.776930 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: I0318 13:44:20.821624 27835 scope.go:117] "RemoveContainer" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: E0318 13:44:20.823392 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8\": container with ID starting with 705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8 not found: ID does not exist" containerID="705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: I0318 13:44:20.823447 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8"} err="failed to get container status \"705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8\": rpc error: code = NotFound desc = could not find container \"705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8\": container with ID starting with 705af0a32d09d6134e6b5fe2a5079c120271380ceae54c11718af0ecb3fdb6f8 not found: ID does not exist" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: I0318 13:44:20.823477 27835 scope.go:117] "RemoveContainer" containerID="58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: E0318 13:44:20.823904 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72\": container with ID starting with 58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72 not found: ID does not exist" 
containerID="58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: I0318 13:44:20.823933 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72"} err="failed to get container status \"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72\": rpc error: code = NotFound desc = could not find container \"58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72\": container with ID starting with 58905c798f20deffc7cbfe67c14685482089c73ef259d1e46e94a436132d9d72 not found: ID does not exist" Mar 18 13:44:20.824162 master-0 kubenswrapper[27835]: I0318 13:44:20.823952 27835 scope.go:117] "RemoveContainer" containerID="0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1" Mar 18 13:44:20.824558 master-0 kubenswrapper[27835]: E0318 13:44:20.824243 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1\": container with ID starting with 0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1 not found: ID does not exist" containerID="0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1" Mar 18 13:44:20.824558 master-0 kubenswrapper[27835]: I0318 13:44:20.824284 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1"} err="failed to get container status \"0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1\": rpc error: code = NotFound desc = could not find container \"0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1\": container with ID starting with 0bd2e74fc8c138cd4452503a6d152f8a27073dd02a3c032f53848b34c53732f1 not found: ID does not exist" Mar 18 13:44:20.824558 master-0 
kubenswrapper[27835]: I0318 13:44:20.824301 27835 scope.go:117] "RemoveContainer" containerID="1bd2458b7437e971916523f8a3129ce49df3b5e69e5aff24455222f6316315ba" Mar 18 13:44:20.866971 master-0 kubenswrapper[27835]: I0318 13:44:20.866913 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle\") pod \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " Mar 18 13:44:20.871740 master-0 kubenswrapper[27835]: I0318 13:44:20.871243 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr6cs\" (UniqueName: \"kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs\") pod \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " Mar 18 13:44:20.871740 master-0 kubenswrapper[27835]: I0318 13:44:20.871458 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config\") pod \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " Mar 18 13:44:20.871740 master-0 kubenswrapper[27835]: I0318 13:44:20.871521 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs\") pod \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " Mar 18 13:44:20.871740 master-0 kubenswrapper[27835]: I0318 13:44:20.871567 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config\") pod \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\" (UID: \"8fb691f3-dd32-4f7d-afa4-4d0980740b64\") " Mar 18 
13:44:20.880278 master-0 kubenswrapper[27835]: I0318 13:44:20.880050 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs" (OuterVolumeSpecName: "kube-api-access-rr6cs") pod "8fb691f3-dd32-4f7d-afa4-4d0980740b64" (UID: "8fb691f3-dd32-4f7d-afa4-4d0980740b64"). InnerVolumeSpecName "kube-api-access-rr6cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:20.943910 master-0 kubenswrapper[27835]: I0318 13:44:20.943807 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fb691f3-dd32-4f7d-afa4-4d0980740b64" (UID: "8fb691f3-dd32-4f7d-afa4-4d0980740b64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:20.974899 master-0 kubenswrapper[27835]: I0318 13:44:20.972634 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8fb691f3-dd32-4f7d-afa4-4d0980740b64" (UID: "8fb691f3-dd32-4f7d-afa4-4d0980740b64"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:20.986346 master-0 kubenswrapper[27835]: I0318 13:44:20.986283 27835 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-httpd-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:20.986346 master-0 kubenswrapper[27835]: I0318 13:44:20.986347 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:20.986346 master-0 kubenswrapper[27835]: I0318 13:44:20.986364 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr6cs\" (UniqueName: \"kubernetes.io/projected/8fb691f3-dd32-4f7d-afa4-4d0980740b64-kube-api-access-rr6cs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:21.019004 master-0 kubenswrapper[27835]: I0318 13:44:21.018939 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d475bdf48-lskhl" event={"ID":"8fb691f3-dd32-4f7d-afa4-4d0980740b64","Type":"ContainerDied","Data":"cd19598ca1d89a078408de8f0131d30772ac5fb3b8af46ae88f7c16c35bcefb3"} Mar 18 13:44:21.019004 master-0 kubenswrapper[27835]: I0318 13:44:21.018965 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d475bdf48-lskhl" Mar 18 13:44:21.019535 master-0 kubenswrapper[27835]: I0318 13:44:21.019508 27835 scope.go:117] "RemoveContainer" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" Mar 18 13:44:21.019826 master-0 kubenswrapper[27835]: E0318 13:44:21.019797 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-689c666fd-tjnb9_openstack(cc7df07d-4c6b-469f-b007-e3d799a49fd5)\"" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" Mar 18 13:44:21.046808 master-0 kubenswrapper[27835]: I0318 13:44:21.046638 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config" (OuterVolumeSpecName: "config") pod "8fb691f3-dd32-4f7d-afa4-4d0980740b64" (UID: "8fb691f3-dd32-4f7d-afa4-4d0980740b64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:21.047930 master-0 kubenswrapper[27835]: I0318 13:44:21.047904 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8fb691f3-dd32-4f7d-afa4-4d0980740b64" (UID: "8fb691f3-dd32-4f7d-afa4-4d0980740b64"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:21.052800 master-0 kubenswrapper[27835]: I0318 13:44:21.052286 27835 scope.go:117] "RemoveContainer" containerID="e862540dc618ae0a90bacb8bc47bc230cb6d3a28836d17d998ad9600dce1898f"
Mar 18 13:44:21.088659 master-0 kubenswrapper[27835]: I0318 13:44:21.088610 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:21.088779 master-0 kubenswrapper[27835]: I0318 13:44:21.088662 27835 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb691f3-dd32-4f7d-afa4-4d0980740b64-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:21.274836 master-0 kubenswrapper[27835]: I0318 13:44:21.267223 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f6cbd65db-gzvz8"]
Mar 18 13:44:21.348183 master-0 kubenswrapper[27835]: I0318 13:44:21.348131 27835 scope.go:117] "RemoveContainer" containerID="26067cdf644c0ff8b6005aa4255c18ea6a5029445373767e9f274f38676036a1"
Mar 18 13:44:21.407241 master-0 kubenswrapper[27835]: W0318 13:44:21.406897 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8553452_d2f4_4ad0_9fe0_0d2d984be2b0.slice/crio-486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2 WatchSource:0}: Error finding container 486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2: Status 404 returned error can't find the container with id 486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2
Mar 18 13:44:21.409571 master-0 kubenswrapper[27835]: I0318 13:44:21.409464 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ff10-account-create-update-49jqz"]
Mar 18 13:44:21.513689 master-0 kubenswrapper[27835]: I0318 13:44:21.513629 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"]
Mar 18 13:44:21.565828 master-0 kubenswrapper[27835]: I0318 13:44:21.565779 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d475bdf48-lskhl"]
Mar 18 13:44:21.594547 master-0 kubenswrapper[27835]: I0318 13:44:21.594491 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gwfch"]
Mar 18 13:44:21.608801 master-0 kubenswrapper[27835]: I0318 13:44:21.608760 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-g2nft"]
Mar 18 13:44:21.626378 master-0 kubenswrapper[27835]: W0318 13:44:21.625725 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod549326ca_2cc7_49f2_bffb_a76953703d01.slice/crio-e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a WatchSource:0}: Error finding container e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a: Status 404 returned error can't find the container with id e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a
Mar 18 13:44:21.760581 master-0 kubenswrapper[27835]: I0318 13:44:21.760365 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-g9ddk"]
Mar 18 13:44:21.772780 master-0 kubenswrapper[27835]: I0318 13:44:21.772054 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ea54-account-create-update-85drm"]
Mar 18 13:44:21.807829 master-0 kubenswrapper[27835]: W0318 13:44:21.804987 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ceb0690_8659_42ff_929b_faf3879c7ffb.slice/crio-2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c WatchSource:0}: Error finding container 2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c: Status 404 returned error can't find the container with id 2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c
Mar 18 13:44:22.045073 master-0 kubenswrapper[27835]: I0318 13:44:22.045006 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" event={"ID":"8db812c3-c391-4147-8220-fdd68cdd11d3","Type":"ContainerStarted","Data":"f75f514b01b1b82c84aad457148f8b3a9001592ba1f3a3e2dbf52b5f06b410b3"}
Mar 18 13:44:22.047709 master-0 kubenswrapper[27835]: I0318 13:44:22.047639 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gwfch" event={"ID":"549326ca-2cc7-49f2-bffb-a76953703d01","Type":"ContainerStarted","Data":"e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a"}
Mar 18 13:44:22.051401 master-0 kubenswrapper[27835]: I0318 13:44:22.051254 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-63a9-account-create-update-dbfqn"]
Mar 18 13:44:22.051807 master-0 kubenswrapper[27835]: I0318 13:44:22.051777 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9ddk" event={"ID":"8aeac094-6720-488f-a255-c3042b569033","Type":"ContainerStarted","Data":"fc6674d4f08c980e3e90f55e33cd09b51e6bf239d65634ab264a56f19f2d8cac"}
Mar 18 13:44:22.054131 master-0 kubenswrapper[27835]: I0318 13:44:22.054091 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g2nft" event={"ID":"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1","Type":"ContainerStarted","Data":"37e4b966137d701559ee67f00a93ed73cbf3780318ae565cf3684696d5bb348c"}
Mar 18 13:44:22.066300 master-0 kubenswrapper[27835]: I0318 13:44:22.066193 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ea54-account-create-update-85drm" event={"ID":"1ceb0690-8659-42ff-929b-faf3879c7ffb","Type":"ContainerStarted","Data":"2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c"}
Mar 18 13:44:22.071861 master-0 kubenswrapper[27835]: I0318 13:44:22.071809 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ff10-account-create-update-49jqz" event={"ID":"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0","Type":"ContainerStarted","Data":"486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2"}
Mar 18 13:44:22.074712 master-0 kubenswrapper[27835]: I0318 13:44:22.074686 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-s7cvj" event={"ID":"c1fc873e-3d35-4632-a144-08e9b6e74e02","Type":"ContainerStarted","Data":"0312fd080177f511a07563c2683e72c0cda431a7a0cc86d7947c4415b10c3575"}
Mar 18 13:44:22.300734 master-0 kubenswrapper[27835]: I0318 13:44:22.300618 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" path="/var/lib/kubelet/pods/8fb691f3-dd32-4f7d-afa4-4d0980740b64/volumes"
Mar 18 13:44:22.403556 master-0 kubenswrapper[27835]: I0318 13:44:22.402611 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-s7cvj" podStartSLOduration=3.910341593 podStartE2EDuration="13.402591213s" podCreationTimestamp="2026-03-18 13:44:09 +0000 UTC" firstStartedPulling="2026-03-18 13:44:11.466226914 +0000 UTC m=+1215.431438474" lastFinishedPulling="2026-03-18 13:44:20.958476524 +0000 UTC m=+1224.923688094" observedRunningTime="2026-03-18 13:44:22.397851817 +0000 UTC m=+1226.363063377" watchObservedRunningTime="2026-03-18 13:44:22.402591213 +0000 UTC m=+1226.367802773"
Mar 18 13:44:23.101876 master-0 kubenswrapper[27835]: I0318 13:44:23.101730 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" event={"ID":"8db812c3-c391-4147-8220-fdd68cdd11d3","Type":"ContainerStarted","Data":"6dbf93ec8ba87edfdd5dfbfa23acbff134730c6b1ba74b76910a764092e6bc6b"}
Mar 18 13:44:23.106843 master-0 kubenswrapper[27835]: I0318 13:44:23.106800 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gwfch" event={"ID":"549326ca-2cc7-49f2-bffb-a76953703d01","Type":"ContainerStarted","Data":"5597e6d49e9547d152f4fa72804860c3ba1b6d6db87bee7409410b5edef5a1d8"}
Mar 18 13:44:23.150677 master-0 kubenswrapper[27835]: I0318 13:44:23.150374 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-gwfch" podStartSLOduration=6.150354908 podStartE2EDuration="6.150354908s" podCreationTimestamp="2026-03-18 13:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:23.146922438 +0000 UTC m=+1227.112133998" watchObservedRunningTime="2026-03-18 13:44:23.150354908 +0000 UTC m=+1227.115566468"
Mar 18 13:44:26.943084 master-0 kubenswrapper[27835]: I0318 13:44:26.943011 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-external-api-0"]
Mar 18 13:44:26.943837 master-0 kubenswrapper[27835]: I0318 13:44:26.943343 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-external-api-0" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" containerID="cri-o://8b770528ca0aacbabf474fe5cb46f81ed7316ab9070d767a1a2654780b3d081b" gracePeriod=30
Mar 18 13:44:26.943837 master-0 kubenswrapper[27835]: I0318 13:44:26.943555 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-external-api-0" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-httpd" containerID="cri-o://db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54" gracePeriod=30
Mar 18 13:44:26.965557 master-0 kubenswrapper[27835]: I0318 13:44:26.964697 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-4f519-default-external-api-0" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.212:9292/healthcheck\": EOF"
Mar 18 13:44:26.965557 master-0 kubenswrapper[27835]: I0318 13:44:26.964926 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-4f519-default-external-api-0" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.212:9292/healthcheck\": EOF"
Mar 18 13:44:26.965557 master-0 kubenswrapper[27835]: I0318 13:44:26.964697 27835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-4f519-default-external-api-0" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.212:9292/healthcheck\": EOF"
Mar 18 13:44:29.303274 master-0 kubenswrapper[27835]: W0318 13:44:29.303176 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38fdf2ba_5074_47b8_b534_c12270b771e8.slice/crio-b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3 WatchSource:0}: Error finding container b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3: Status 404 returned error can't find the container with id b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3
Mar 18 13:44:29.353327 master-0 kubenswrapper[27835]: I0318 13:44:29.348534 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:44:29.353327 master-0 kubenswrapper[27835]: I0318 13:44:29.348803 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-internal-api-0" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-log" containerID="cri-o://ca78313e63b64504e4ce64c347c881395f4cdb56b4dbf7fb7c3f21484da0ea8b" gracePeriod=30
Mar 18 13:44:29.353327 master-0 kubenswrapper[27835]: I0318 13:44:29.349278 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-4f519-default-internal-api-0" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-httpd" containerID="cri-o://3558dec8b9ad35dc7342bcd32de8cdbe7f94961bdfbc20eaa99bee14dbc28448" gracePeriod=30
Mar 18 13:44:30.208054 master-0 kubenswrapper[27835]: I0318 13:44:30.208006 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ea54-account-create-update-85drm" event={"ID":"1ceb0690-8659-42ff-929b-faf3879c7ffb","Type":"ContainerStarted","Data":"be10b92153f7fe6759bf9e4d215e8f28cf950588bdef90198582f06abae990c4"}
Mar 18 13:44:30.210768 master-0 kubenswrapper[27835]: I0318 13:44:30.210616 27835 generic.go:334] "Generic (PLEG): container finished" podID="549326ca-2cc7-49f2-bffb-a76953703d01" containerID="5597e6d49e9547d152f4fa72804860c3ba1b6d6db87bee7409410b5edef5a1d8" exitCode=0
Mar 18 13:44:30.210768 master-0 kubenswrapper[27835]: I0318 13:44:30.210746 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gwfch" event={"ID":"549326ca-2cc7-49f2-bffb-a76953703d01","Type":"ContainerDied","Data":"5597e6d49e9547d152f4fa72804860c3ba1b6d6db87bee7409410b5edef5a1d8"}
Mar 18 13:44:30.214352 master-0 kubenswrapper[27835]: I0318 13:44:30.214293 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9ddk" event={"ID":"8aeac094-6720-488f-a255-c3042b569033","Type":"ContainerStarted","Data":"1e6b1e8819e8a12a7b7e990a8e19d8e4f7984c05cffd7f031521a8ea82402bf1"}
Mar 18 13:44:30.220744 master-0 kubenswrapper[27835]: I0318 13:44:30.220439 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g2nft" event={"ID":"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1","Type":"ContainerStarted","Data":"a5280cfd07f86827b6c9251e1e64c2bc65a4dad071c99d1ce42a445991bbc232"}
Mar 18 13:44:30.222921 master-0 kubenswrapper[27835]: I0318 13:44:30.222883 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ff10-account-create-update-49jqz" event={"ID":"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0","Type":"ContainerStarted","Data":"3cc53609c480eeb1c7b8bcc3fe08f25b4e51778c8ee75d8a10e210051a86414d"}
Mar 18 13:44:30.232020 master-0 kubenswrapper[27835]: I0318 13:44:30.231900 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ea54-account-create-update-85drm" podStartSLOduration=11.231887383 podStartE2EDuration="11.231887383s" podCreationTimestamp="2026-03-18 13:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:30.225838942 +0000 UTC m=+1234.191050492" watchObservedRunningTime="2026-03-18 13:44:30.231887383 +0000 UTC m=+1234.197098933"
Mar 18 13:44:30.232516 master-0 kubenswrapper[27835]: I0318 13:44:30.232449 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" event={"ID":"38fdf2ba-5074-47b8-b534-c12270b771e8","Type":"ContainerStarted","Data":"b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3"}
Mar 18 13:44:30.236212 master-0 kubenswrapper[27835]: I0318 13:44:30.236163 27835 generic.go:334] "Generic (PLEG): container finished" podID="c1fc873e-3d35-4632-a144-08e9b6e74e02" containerID="0312fd080177f511a07563c2683e72c0cda431a7a0cc86d7947c4415b10c3575" exitCode=0
Mar 18 13:44:30.236312 master-0 kubenswrapper[27835]: I0318 13:44:30.236228 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-s7cvj" event={"ID":"c1fc873e-3d35-4632-a144-08e9b6e74e02","Type":"ContainerDied","Data":"0312fd080177f511a07563c2683e72c0cda431a7a0cc86d7947c4415b10c3575"}
Mar 18 13:44:30.239514 master-0 kubenswrapper[27835]: I0318 13:44:30.238931 27835 generic.go:334] "Generic (PLEG): container finished" podID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerID="8b770528ca0aacbabf474fe5cb46f81ed7316ab9070d767a1a2654780b3d081b" exitCode=143
Mar 18 13:44:30.239514 master-0 kubenswrapper[27835]: I0318 13:44:30.238970 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerDied","Data":"8b770528ca0aacbabf474fe5cb46f81ed7316ab9070d767a1a2654780b3d081b"}
Mar 18 13:44:30.263923 master-0 kubenswrapper[27835]: I0318 13:44:30.263835 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ff10-account-create-update-49jqz" podStartSLOduration=12.263815409 podStartE2EDuration="12.263815409s" podCreationTimestamp="2026-03-18 13:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:30.249184541 +0000 UTC m=+1234.214396101" watchObservedRunningTime="2026-03-18 13:44:30.263815409 +0000 UTC m=+1234.229026969"
Mar 18 13:44:30.267890 master-0 kubenswrapper[27835]: I0318 13:44:30.266738 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-g2nft" podStartSLOduration=12.266720916 podStartE2EDuration="12.266720916s" podCreationTimestamp="2026-03-18 13:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:30.264738003 +0000 UTC m=+1234.229949563" watchObservedRunningTime="2026-03-18 13:44:30.266720916 +0000 UTC m=+1234.231932496"
Mar 18 13:44:30.298352 master-0 kubenswrapper[27835]: I0318 13:44:30.297547 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-g9ddk" podStartSLOduration=11.297525913 podStartE2EDuration="11.297525913s" podCreationTimestamp="2026-03-18 13:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:30.283619754 +0000 UTC m=+1234.248831314" watchObservedRunningTime="2026-03-18 13:44:30.297525913 +0000 UTC m=+1234.262737473"
Mar 18 13:44:32.690452 master-0 kubenswrapper[27835]: E0318 13:44:32.690302 27835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b8479b_b0d7_4f54_a717_0dd3289cf5be.slice/crio-conmon-db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 13:44:34.302822 master-0 kubenswrapper[27835]: I0318 13:44:34.302761 27835 generic.go:334] "Generic (PLEG): container finished" podID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerID="3558dec8b9ad35dc7342bcd32de8cdbe7f94961bdfbc20eaa99bee14dbc28448" exitCode=0
Mar 18 13:44:34.302822 master-0 kubenswrapper[27835]: I0318 13:44:34.302802 27835 generic.go:334] "Generic (PLEG): container finished" podID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerID="ca78313e63b64504e4ce64c347c881395f4cdb56b4dbf7fb7c3f21484da0ea8b" exitCode=143
Mar 18 13:44:34.303446 master-0 kubenswrapper[27835]: I0318 13:44:34.302849 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerDied","Data":"3558dec8b9ad35dc7342bcd32de8cdbe7f94961bdfbc20eaa99bee14dbc28448"}
Mar 18 13:44:34.303446 master-0 kubenswrapper[27835]: I0318 13:44:34.303189 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerDied","Data":"ca78313e63b64504e4ce64c347c881395f4cdb56b4dbf7fb7c3f21484da0ea8b"}
Mar 18 13:44:34.309174 master-0 kubenswrapper[27835]: I0318 13:44:34.309131 27835 generic.go:334] "Generic (PLEG): container finished" podID="83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" containerID="a5280cfd07f86827b6c9251e1e64c2bc65a4dad071c99d1ce42a445991bbc232" exitCode=0
Mar 18 13:44:34.309386 master-0 kubenswrapper[27835]: I0318 13:44:34.309210 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g2nft" event={"ID":"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1","Type":"ContainerDied","Data":"a5280cfd07f86827b6c9251e1e64c2bc65a4dad071c99d1ce42a445991bbc232"}
Mar 18 13:44:34.317939 master-0 kubenswrapper[27835]: I0318 13:44:34.316751 27835 generic.go:334] "Generic (PLEG): container finished" podID="d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" containerID="3cc53609c480eeb1c7b8bcc3fe08f25b4e51778c8ee75d8a10e210051a86414d" exitCode=0
Mar 18 13:44:34.317939 master-0 kubenswrapper[27835]: I0318 13:44:34.316854 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ff10-account-create-update-49jqz" event={"ID":"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0","Type":"ContainerDied","Data":"3cc53609c480eeb1c7b8bcc3fe08f25b4e51778c8ee75d8a10e210051a86414d"}
Mar 18 13:44:34.319907 master-0 kubenswrapper[27835]: I0318 13:44:34.319867 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-s7cvj" event={"ID":"c1fc873e-3d35-4632-a144-08e9b6e74e02","Type":"ContainerDied","Data":"3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047"}
Mar 18 13:44:34.320251 master-0 kubenswrapper[27835]: I0318 13:44:34.319914 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3287b2de7232f09d948cffddfa4eedc0a7248a437eeec98cadc5d62d97419047"
Mar 18 13:44:34.322900 master-0 kubenswrapper[27835]: I0318 13:44:34.322847 27835 generic.go:334] "Generic (PLEG): container finished" podID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerID="db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54" exitCode=0
Mar 18 13:44:34.322993 master-0 kubenswrapper[27835]: I0318 13:44:34.322925 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerDied","Data":"db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54"}
Mar 18 13:44:34.339436 master-0 kubenswrapper[27835]: I0318 13:44:34.333695 27835 generic.go:334] "Generic (PLEG): container finished" podID="1ceb0690-8659-42ff-929b-faf3879c7ffb" containerID="be10b92153f7fe6759bf9e4d215e8f28cf950588bdef90198582f06abae990c4" exitCode=0
Mar 18 13:44:34.339436 master-0 kubenswrapper[27835]: I0318 13:44:34.333760 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ea54-account-create-update-85drm" event={"ID":"1ceb0690-8659-42ff-929b-faf3879c7ffb","Type":"ContainerDied","Data":"be10b92153f7fe6759bf9e4d215e8f28cf950588bdef90198582f06abae990c4"}
Mar 18 13:44:34.339711 master-0 kubenswrapper[27835]: I0318 13:44:34.339519 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gwfch" event={"ID":"549326ca-2cc7-49f2-bffb-a76953703d01","Type":"ContainerDied","Data":"e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a"}
Mar 18 13:44:34.339711 master-0 kubenswrapper[27835]: I0318 13:44:34.339571 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3195fbff100a17cc38a14ab17f8841fffe259de8e0910acc5acee6dd002c13a"
Mar 18 13:44:34.346968 master-0 kubenswrapper[27835]: I0318 13:44:34.346921 27835 generic.go:334] "Generic (PLEG): container finished" podID="8aeac094-6720-488f-a255-c3042b569033" containerID="1e6b1e8819e8a12a7b7e990a8e19d8e4f7984c05cffd7f031521a8ea82402bf1" exitCode=0
Mar 18 13:44:34.346968 master-0 kubenswrapper[27835]: I0318 13:44:34.346970 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9ddk" event={"ID":"8aeac094-6720-488f-a255-c3042b569033","Type":"ContainerDied","Data":"1e6b1e8819e8a12a7b7e990a8e19d8e4f7984c05cffd7f031521a8ea82402bf1"}
Mar 18 13:44:34.370869 master-0 kubenswrapper[27835]: I0318 13:44:34.370238 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gwfch"
Mar 18 13:44:34.599397 master-0 kubenswrapper[27835]: I0318 13:44:34.599331 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-s7cvj"
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625006 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts\") pod \"549326ca-2cc7-49f2-bffb-a76953703d01\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625090 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztkmx\" (UniqueName: \"kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx\") pod \"549326ca-2cc7-49f2-bffb-a76953703d01\" (UID: \"549326ca-2cc7-49f2-bffb-a76953703d01\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625139 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625204 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625240 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625293 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.625852 master-0 kubenswrapper[27835]: I0318 13:44:34.625319 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqvjk\" (UniqueName: \"kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.628479 master-0 kubenswrapper[27835]: I0318 13:44:34.625342 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.628479 master-0 kubenswrapper[27835]: I0318 13:44:34.626000 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"c1fc873e-3d35-4632-a144-08e9b6e74e02\" (UID: \"c1fc873e-3d35-4632-a144-08e9b6e74e02\") "
Mar 18 13:44:34.628479 master-0 kubenswrapper[27835]: I0318 13:44:34.626870 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:34.631051 master-0 kubenswrapper[27835]: I0318 13:44:34.630340 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "549326ca-2cc7-49f2-bffb-a76953703d01" (UID: "549326ca-2cc7-49f2-bffb-a76953703d01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:44:34.631051 master-0 kubenswrapper[27835]: I0318 13:44:34.630915 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:34.650660 master-0 kubenswrapper[27835]: I0318 13:44:34.650601 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 18 13:44:34.666221 master-0 kubenswrapper[27835]: I0318 13:44:34.666152 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx" (OuterVolumeSpecName: "kube-api-access-ztkmx") pod "549326ca-2cc7-49f2-bffb-a76953703d01" (UID: "549326ca-2cc7-49f2-bffb-a76953703d01"). InnerVolumeSpecName "kube-api-access-ztkmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:34.666221 master-0 kubenswrapper[27835]: I0318 13:44:34.666174 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts" (OuterVolumeSpecName: "scripts") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:34.672034 master-0 kubenswrapper[27835]: I0318 13:44:34.671838 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk" (OuterVolumeSpecName: "kube-api-access-nqvjk") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "kube-api-access-nqvjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:34.689824 master-0 kubenswrapper[27835]: I0318 13:44:34.689770 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config" (OuterVolumeSpecName: "config") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:34.697937 master-0 kubenswrapper[27835]: I0318 13:44:34.697893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1fc873e-3d35-4632-a144-08e9b6e74e02" (UID: "c1fc873e-3d35-4632-a144-08e9b6e74e02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:34.727498 master-0 kubenswrapper[27835]: I0318 13:44:34.727438 27835 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727498 master-0 kubenswrapper[27835]: I0318 13:44:34.727487 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727498 master-0 kubenswrapper[27835]: I0318 13:44:34.727500 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqvjk\" (UniqueName: \"kubernetes.io/projected/c1fc873e-3d35-4632-a144-08e9b6e74e02-kube-api-access-nqvjk\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727512 27835 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/c1fc873e-3d35-4632-a144-08e9b6e74e02-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727525 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/549326ca-2cc7-49f2-bffb-a76953703d01-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727534 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztkmx\" (UniqueName: \"kubernetes.io/projected/549326ca-2cc7-49f2-bffb-a76953703d01-kube-api-access-ztkmx\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727543 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727552 27835 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/c1fc873e-3d35-4632-a144-08e9b6e74e02-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.727761 master-0 kubenswrapper[27835]: I0318 13:44:34.727559 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1fc873e-3d35-4632-a144-08e9b6e74e02-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:34.814960 master-0 kubenswrapper[27835]: I0318 13:44:34.814906 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:34.932284 master-0 kubenswrapper[27835]: I0318 13:44:34.932236 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.932838 master-0 kubenswrapper[27835]: I0318 13:44:34.932807 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.933085 master-0 kubenswrapper[27835]: I0318 13:44:34.933033 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.933226 master-0 kubenswrapper[27835]: I0318 13:44:34.933212 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.933328 master-0 kubenswrapper[27835]: I0318 13:44:34.933315 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.933579 master-0 kubenswrapper[27835]: I0318 13:44:34.933530 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgk8s\" (UniqueName: \"kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.934001 master-0 kubenswrapper[27835]: I0318 13:44:34.933983 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.934463 master-0 kubenswrapper[27835]: I0318 13:44:34.934443 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle\") pod \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\" (UID: \"70b8479b-b0d7-4f54-a717-0dd3289cf5be\") "
Mar 18 13:44:34.936855 master-0 kubenswrapper[27835]: I0318 13:44:34.934438 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:34.937638 master-0 kubenswrapper[27835]: I0318 13:44:34.937435 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs" (OuterVolumeSpecName: "logs") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:44:34.938388 master-0 kubenswrapper[27835]: I0318 13:44:34.938358 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts" (OuterVolumeSpecName: "scripts") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:34.945760 master-0 kubenswrapper[27835]: I0318 13:44:34.945691 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s" (OuterVolumeSpecName: "kube-api-access-xgk8s") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "kube-api-access-xgk8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:34.967787 master-0 kubenswrapper[27835]: I0318 13:44:34.967726 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:44:34.969746 master-0 kubenswrapper[27835]: I0318 13:44:34.969703 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61" (OuterVolumeSpecName: "glance") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "pvc-ff789d6f-852a-4819-b19c-09444384ecbe".
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 13:44:34.989308 master-0 kubenswrapper[27835]: I0318 13:44:34.987843 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:35.027441 master-0 kubenswrapper[27835]: I0318 13:44:35.022521 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data" (OuterVolumeSpecName: "config-data") pod "70b8479b-b0d7-4f54-a717-0dd3289cf5be" (UID: "70b8479b-b0d7-4f54-a717-0dd3289cf5be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.038934 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.038988 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.038998 27835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.039041 27835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") on node \"master-0\" " Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.039053 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.039067 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70b8479b-b0d7-4f54-a717-0dd3289cf5be-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.039075 27835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/70b8479b-b0d7-4f54-a717-0dd3289cf5be-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.042438 master-0 kubenswrapper[27835]: I0318 13:44:35.039101 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgk8s\" (UniqueName: \"kubernetes.io/projected/70b8479b-b0d7-4f54-a717-0dd3289cf5be-kube-api-access-xgk8s\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.064519 master-0 kubenswrapper[27835]: I0318 13:44:35.064116 27835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 13:44:35.064519 master-0 kubenswrapper[27835]: I0318 13:44:35.064322 27835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ff789d6f-852a-4819-b19c-09444384ecbe" (UniqueName: "kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61") on node "master-0" Mar 18 13:44:35.143441 master-0 kubenswrapper[27835]: I0318 13:44:35.142371 27835 reconciler_common.go:293] "Volume detached for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.282326 master-0 kubenswrapper[27835]: I0318 13:44:35.282259 27835 scope.go:117] "RemoveContainer" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" Mar 18 13:44:35.520435 master-0 kubenswrapper[27835]: I0318 13:44:35.519935 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c7433697-622c-4928-be5c-cd3a3c65cc8c","Type":"ContainerStarted","Data":"d158bad4b479a63f491c460947b85f851d6d38f4843c81157ec097dbeb38a44a"} Mar 18 13:44:35.538456 master-0 kubenswrapper[27835]: I0318 13:44:35.522944 27835 generic.go:334] "Generic (PLEG): container finished" podID="38fdf2ba-5074-47b8-b534-c12270b771e8" containerID="31040f8f4ece5c7a6678f73c7a1a815c6442ae198c89e35f867feb05700421bf" exitCode=0 Mar 18 13:44:35.538456 master-0 kubenswrapper[27835]: I0318 13:44:35.523036 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" event={"ID":"38fdf2ba-5074-47b8-b534-c12270b771e8","Type":"ContainerDied","Data":"31040f8f4ece5c7a6678f73c7a1a815c6442ae198c89e35f867feb05700421bf"} Mar 18 13:44:35.555434 master-0 kubenswrapper[27835]: I0318 13:44:35.547299 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" 
event={"ID":"70b8479b-b0d7-4f54-a717-0dd3289cf5be","Type":"ContainerDied","Data":"d23185235997ef0f8220a1a23ccb2ba865e4e882eba38ecf441d6f35fcb9dbce"} Mar 18 13:44:35.555434 master-0 kubenswrapper[27835]: I0318 13:44:35.547377 27835 scope.go:117] "RemoveContainer" containerID="db6da0c7160cdd36f1baf7c5cec389c7b26705bbdbbeea84501b520ea2a2ff54" Mar 18 13:44:35.555434 master-0 kubenswrapper[27835]: I0318 13:44:35.548009 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:35.572429 master-0 kubenswrapper[27835]: I0318 13:44:35.568024 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"5fd35d585f61f77da01e7b63f5236b4007ee2cd18fb459f5a50a3fe9098e8fed"} Mar 18 13:44:35.575654 master-0 kubenswrapper[27835]: I0318 13:44:35.573190 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gwfch" Mar 18 13:44:35.575654 master-0 kubenswrapper[27835]: I0318 13:44:35.573251 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" event={"ID":"8db812c3-c391-4147-8220-fdd68cdd11d3","Type":"ContainerStarted","Data":"b740dcd8b7df91025f8bd4604c8a473eec1c48c783c9defc7e91874cbf3cdb30"} Mar 18 13:44:35.575654 master-0 kubenswrapper[27835]: I0318 13:44:35.574000 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" Mar 18 13:44:35.575654 master-0 kubenswrapper[27835]: I0318 13:44:35.574517 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" Mar 18 13:44:35.575654 master-0 kubenswrapper[27835]: I0318 13:44:35.574771 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-s7cvj" Mar 18 13:44:35.601518 master-0 kubenswrapper[27835]: I0318 13:44:35.597399 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" podUID="8db812c3-c391-4147-8220-fdd68cdd11d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 13:44:35.601518 master-0 kubenswrapper[27835]: I0318 13:44:35.598766 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.763812905 podStartE2EDuration="26.598753036s" podCreationTimestamp="2026-03-18 13:44:09 +0000 UTC" firstStartedPulling="2026-03-18 13:44:10.344143284 +0000 UTC m=+1214.309354844" lastFinishedPulling="2026-03-18 13:44:34.179083415 +0000 UTC m=+1238.144294975" observedRunningTime="2026-03-18 13:44:35.558016945 +0000 UTC m=+1239.523228505" watchObservedRunningTime="2026-03-18 13:44:35.598753036 +0000 UTC m=+1239.563964596" Mar 18 13:44:35.733436 master-0 kubenswrapper[27835]: I0318 13:44:35.730745 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" podStartSLOduration=18.730723695000002 podStartE2EDuration="18.730723695s" podCreationTimestamp="2026-03-18 13:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:35.691157316 +0000 UTC m=+1239.656368896" watchObservedRunningTime="2026-03-18 13:44:35.730723695 +0000 UTC m=+1239.695935255" Mar 18 13:44:35.807688 master-0 kubenswrapper[27835]: I0318 13:44:35.800960 27835 scope.go:117] "RemoveContainer" containerID="8b770528ca0aacbabf474fe5cb46f81ed7316ab9070d767a1a2654780b3d081b" Mar 18 13:44:35.832434 master-0 kubenswrapper[27835]: I0318 13:44:35.830317 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:35.878568 master-0 kubenswrapper[27835]: I0318 13:44:35.876847 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:44:35.927567 master-0 kubenswrapper[27835]: I0318 13:44:35.920498 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945232 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945363 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945467 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945534 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8c4p\" (UniqueName: \"kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945557 27835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945605 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945818 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.947977 master-0 kubenswrapper[27835]: I0318 13:44:35.945854 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts\") pod \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\" (UID: \"9bf459ab-dc8e-4a13-bbee-b68d6d031781\") " Mar 18 13:44:35.956357 master-0 kubenswrapper[27835]: I0318 13:44:35.955542 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs" (OuterVolumeSpecName: "logs") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:44:35.956357 master-0 kubenswrapper[27835]: I0318 13:44:35.956000 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p" (OuterVolumeSpecName: "kube-api-access-w8c4p") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "kube-api-access-w8c4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.956812 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.957695 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958271 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958288 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958325 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958334 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-httpd" Mar 
18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958351 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958358 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958377 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958383 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958396 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958403 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958435 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-api" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958443 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-api" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958470 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549326ca-2cc7-49f2-bffb-a76953703d01" containerName="mariadb-database-create" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958476 27835 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="549326ca-2cc7-49f2-bffb-a76953703d01" containerName="mariadb-database-create" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: E0318 13:44:35.958493 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fc873e-3d35-4632-a144-08e9b6e74e02" containerName="ironic-inspector-db-sync" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958500 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fc873e-3d35-4632-a144-08e9b6e74e02" containerName="ironic-inspector-db-sync" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958704 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958741 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fc873e-3d35-4632-a144-08e9b6e74e02" containerName="ironic-inspector-db-sync" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958755 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958770 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="549326ca-2cc7-49f2-bffb-a76953703d01" containerName="mariadb-database-create" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958784 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958797 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" containerName="glance-httpd" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958810 27835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" containerName="glance-log" Mar 18 13:44:35.959621 master-0 kubenswrapper[27835]: I0318 13:44:35.958818 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb691f3-dd32-4f7d-afa4-4d0980740b64" containerName="neutron-api" Mar 18 13:44:35.960832 master-0 kubenswrapper[27835]: I0318 13:44:35.960242 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:35.965405 master-0 kubenswrapper[27835]: I0318 13:44:35.961739 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.965405 master-0 kubenswrapper[27835]: I0318 13:44:35.961765 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8c4p\" (UniqueName: \"kubernetes.io/projected/9bf459ab-dc8e-4a13-bbee-b68d6d031781-kube-api-access-w8c4p\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.965405 master-0 kubenswrapper[27835]: I0318 13:44:35.961777 27835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9bf459ab-dc8e-4a13-bbee-b68d6d031781-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:35.966053 master-0 kubenswrapper[27835]: I0318 13:44:35.965703 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-external-config-data" Mar 18 13:44:35.966053 master-0 kubenswrapper[27835]: I0318 13:44:35.965792 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 18 13:44:35.968795 master-0 kubenswrapper[27835]: I0318 13:44:35.968587 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts" (OuterVolumeSpecName: "scripts") pod 
"9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:36.040045 master-0 kubenswrapper[27835]: I0318 13:44:36.039181 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:44:36.060845 master-0 kubenswrapper[27835]: I0318 13:44:36.060124 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76" (OuterVolumeSpecName: "glance") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "pvc-3ead41c4-903a-4686-a384-328e4b9fb938". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075556 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075618 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075694 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: 
\"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075819 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075851 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86tq\" (UniqueName: \"kubernetes.io/projected/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-kube-api-access-p86tq\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.075983 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.076105 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.077520 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.077677 27835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") on node \"master-0\" " Mar 18 13:44:36.081185 master-0 kubenswrapper[27835]: I0318 13:44:36.077696 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:36.096639 master-0 kubenswrapper[27835]: I0318 13:44:36.096555 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data" (OuterVolumeSpecName: "config-data") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:36.104586 master-0 kubenswrapper[27835]: I0318 13:44:36.104533 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:36.123405 master-0 kubenswrapper[27835]: I0318 13:44:36.122657 27835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" podUID="8db812c3-c391-4147-8220-fdd68cdd11d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 13:44:36.126964 master-0 kubenswrapper[27835]: I0318 13:44:36.126904 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9bf459ab-dc8e-4a13-bbee-b68d6d031781" (UID: "9bf459ab-dc8e-4a13-bbee-b68d6d031781"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:36.180342 master-0 kubenswrapper[27835]: I0318 13:44:36.180138 27835 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 13:44:36.180342 master-0 kubenswrapper[27835]: I0318 13:44:36.180312 27835 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3ead41c4-903a-4686-a384-328e4b9fb938" (UniqueName: "kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76") on node "master-0"
Mar 18 13:44:36.182132 master-0 kubenswrapper[27835]: I0318 13:44:36.181591 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.182271 master-0 kubenswrapper[27835]: I0318 13:44:36.182248 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p86tq\" (UniqueName: \"kubernetes.io/projected/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-kube-api-access-p86tq\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.187827 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.187948 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188132 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188302 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188332 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188425 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188582 27835 reconciler_common.go:293] "Volume detached for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188604 27835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188619 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.188630 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bf459ab-dc8e-4a13-bbee-b68d6d031781-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.189290 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-logs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.189387 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.190013 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-public-tls-certs\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.190090 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-httpd-run\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.191722 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.191745 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d7d2380a2367ec81f9f9b44b1b86eaac9ba6ff0ab5cc582d80f5ba97c51d1f86/globalmount\"" pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.193373 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-combined-ca-bundle\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.198502 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p86tq\" (UniqueName: \"kubernetes.io/projected/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-kube-api-access-p86tq\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.203110 master-0 kubenswrapper[27835]: I0318 13:44:36.200907 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-scripts\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.210454 master-0 kubenswrapper[27835]: I0318 13:44:36.208101 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1-config-data\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:36.293215 master-0 kubenswrapper[27835]: I0318 13:44:36.291815 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pvz9\" (UniqueName: \"kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9\") pod \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") "
Mar 18 13:44:36.293215 master-0 kubenswrapper[27835]: I0318 13:44:36.291906 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts\") pod \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\" (UID: \"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1\") "
Mar 18 13:44:36.295676 master-0 kubenswrapper[27835]: I0318 13:44:36.295644 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" (UID: "83e94368-fc4d-4fdd-bb0e-266a8d57bfd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:44:36.299780 master-0 kubenswrapper[27835]: I0318 13:44:36.299714 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9" (OuterVolumeSpecName: "kube-api-access-6pvz9") pod "83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" (UID: "83e94368-fc4d-4fdd-bb0e-266a8d57bfd1"). InnerVolumeSpecName "kube-api-access-6pvz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:36.303475 master-0 kubenswrapper[27835]: I0318 13:44:36.303406 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b8479b-b0d7-4f54-a717-0dd3289cf5be" path="/var/lib/kubelet/pods/70b8479b-b0d7-4f54-a717-0dd3289cf5be/volumes"
Mar 18 13:44:36.402406 master-0 kubenswrapper[27835]: I0318 13:44:36.402356 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pvz9\" (UniqueName: \"kubernetes.io/projected/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-kube-api-access-6pvz9\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.402639 master-0 kubenswrapper[27835]: I0318 13:44:36.402628 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83e94368-fc4d-4fdd-bb0e-266a8d57bfd1-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.624545 master-0 kubenswrapper[27835]: I0318 13:44:36.619192 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-g9ddk" event={"ID":"8aeac094-6720-488f-a255-c3042b569033","Type":"ContainerDied","Data":"fc6674d4f08c980e3e90f55e33cd09b51e6bf239d65634ab264a56f19f2d8cac"}
Mar 18 13:44:36.624545 master-0 kubenswrapper[27835]: I0318 13:44:36.619246 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6674d4f08c980e3e90f55e33cd09b51e6bf239d65634ab264a56f19f2d8cac"
Mar 18 13:44:36.651055 master-0 kubenswrapper[27835]: I0318 13:44:36.650998 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"9bf459ab-dc8e-4a13-bbee-b68d6d031781","Type":"ContainerDied","Data":"bf4e0bf001bf13a8d56c6ea89735e199aab41fed8898d008e6835d4b479eb57c"}
Mar 18 13:44:36.651278 master-0 kubenswrapper[27835]: I0318 13:44:36.651067 27835 scope.go:117] "RemoveContainer" containerID="3558dec8b9ad35dc7342bcd32de8cdbe7f94961bdfbc20eaa99bee14dbc28448"
Mar 18 13:44:36.651278 master-0 kubenswrapper[27835]: I0318 13:44:36.651203 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.661094 master-0 kubenswrapper[27835]: I0318 13:44:36.658030 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-g2nft" event={"ID":"83e94368-fc4d-4fdd-bb0e-266a8d57bfd1","Type":"ContainerDied","Data":"37e4b966137d701559ee67f00a93ed73cbf3780318ae565cf3684696d5bb348c"}
Mar 18 13:44:36.661094 master-0 kubenswrapper[27835]: I0318 13:44:36.658073 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e4b966137d701559ee67f00a93ed73cbf3780318ae565cf3684696d5bb348c"
Mar 18 13:44:36.661094 master-0 kubenswrapper[27835]: I0318 13:44:36.658141 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-g2nft"
Mar 18 13:44:36.664281 master-0 kubenswrapper[27835]: I0318 13:44:36.664234 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ff10-account-create-update-49jqz" event={"ID":"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0","Type":"ContainerDied","Data":"486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2"}
Mar 18 13:44:36.664395 master-0 kubenswrapper[27835]: I0318 13:44:36.664290 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="486a9983550ee5d88aea6e4e6da007eb1dc24a9553836ad1391b3c82073fd5d2"
Mar 18 13:44:36.671612 master-0 kubenswrapper[27835]: I0318 13:44:36.669590 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ff10-account-create-update-49jqz"
Mar 18 13:44:36.686493 master-0 kubenswrapper[27835]: I0318 13:44:36.685969 27835 scope.go:117] "RemoveContainer" containerID="ca78313e63b64504e4ce64c347c881395f4cdb56b4dbf7fb7c3f21484da0ea8b"
Mar 18 13:44:36.686493 master-0 kubenswrapper[27835]: I0318 13:44:36.686129 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ea54-account-create-update-85drm" event={"ID":"1ceb0690-8659-42ff-929b-faf3879c7ffb","Type":"ContainerDied","Data":"2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c"}
Mar 18 13:44:36.686493 master-0 kubenswrapper[27835]: I0318 13:44:36.686153 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aec8c9cb8648529eb655c9af7352dcbb0ef9e855e67d53f249d0610cd728b7c"
Mar 18 13:44:36.706383 master-0 kubenswrapper[27835]: I0318 13:44:36.706332 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ea54-account-create-update-85drm"
Mar 18 13:44:36.710768 master-0 kubenswrapper[27835]: I0318 13:44:36.710724 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerStarted","Data":"c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529"}
Mar 18 13:44:36.711261 master-0 kubenswrapper[27835]: I0318 13:44:36.711235 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9"
Mar 18 13:44:36.722747 master-0 kubenswrapper[27835]: I0318 13:44:36.721923 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" podUID="8db812c3-c391-4147-8220-fdd68cdd11d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Mar 18 13:44:36.723841 master-0 kubenswrapper[27835]: I0318 13:44:36.723795 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts\") pod \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") "
Mar 18 13:44:36.724030 master-0 kubenswrapper[27835]: I0318 13:44:36.724004 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4tj\" (UniqueName: \"kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj\") pod \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\" (UID: \"d8553452-d2f4-4ad0-9fe0-0d2d984be2b0\") "
Mar 18 13:44:36.738393 master-0 kubenswrapper[27835]: I0318 13:44:36.737245 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" (UID: "d8553452-d2f4-4ad0-9fe0-0d2d984be2b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:44:36.738944 master-0 kubenswrapper[27835]: I0318 13:44:36.738908 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.743310 master-0 kubenswrapper[27835]: I0318 13:44:36.743273 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj" (OuterVolumeSpecName: "kube-api-access-ms4tj") pod "d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" (UID: "d8553452-d2f4-4ad0-9fe0-0d2d984be2b0"). InnerVolumeSpecName "kube-api-access-ms4tj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:36.747437 master-0 kubenswrapper[27835]: I0318 13:44:36.744252 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-g9ddk"
Mar 18 13:44:36.768510 master-0 kubenswrapper[27835]: I0318 13:44:36.763756 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:44:36.791444 master-0 kubenswrapper[27835]: I0318 13:44:36.790480 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:44:36.859432 master-0 kubenswrapper[27835]: I0318 13:44:36.858680 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlr6p\" (UniqueName: \"kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p\") pod \"8aeac094-6720-488f-a255-c3042b569033\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") "
Mar 18 13:44:36.859432 master-0 kubenswrapper[27835]: I0318 13:44:36.858806 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts\") pod \"8aeac094-6720-488f-a255-c3042b569033\" (UID: \"8aeac094-6720-488f-a255-c3042b569033\") "
Mar 18 13:44:36.859432 master-0 kubenswrapper[27835]: I0318 13:44:36.858853 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l762l\" (UniqueName: \"kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l\") pod \"1ceb0690-8659-42ff-929b-faf3879c7ffb\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") "
Mar 18 13:44:36.859432 master-0 kubenswrapper[27835]: I0318 13:44:36.858956 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts\") pod \"1ceb0690-8659-42ff-929b-faf3879c7ffb\" (UID: \"1ceb0690-8659-42ff-929b-faf3879c7ffb\") "
Mar 18 13:44:36.859432 master-0 kubenswrapper[27835]: I0318 13:44:36.859275 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8aeac094-6720-488f-a255-c3042b569033" (UID: "8aeac094-6720-488f-a255-c3042b569033"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.859893 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ceb0690-8659-42ff-929b-faf3879c7ffb" (UID: "1ceb0690-8659-42ff-929b-faf3879c7ffb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.860277 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8aeac094-6720-488f-a255-c3042b569033-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.860300 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ceb0690-8659-42ff-929b-faf3879c7ffb-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.860314 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms4tj\" (UniqueName: \"kubernetes.io/projected/d8553452-d2f4-4ad0-9fe0-0d2d984be2b0-kube-api-access-ms4tj\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.861503 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: E0318 13:44:36.862076 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" containerName="mariadb-account-create-update"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862099 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" containerName="mariadb-account-create-update"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: E0318 13:44:36.862129 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aeac094-6720-488f-a255-c3042b569033" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862138 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aeac094-6720-488f-a255-c3042b569033" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: E0318 13:44:36.862173 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862181 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: E0318 13:44:36.862212 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ceb0690-8659-42ff-929b-faf3879c7ffb" containerName="mariadb-account-create-update"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862219 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ceb0690-8659-42ff-929b-faf3879c7ffb" containerName="mariadb-account-create-update"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862536 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aeac094-6720-488f-a255-c3042b569033" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862567 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8553452-d2f4-4ad0-9fe0-0d2d984be2b0" containerName="mariadb-account-create-update"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862600 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e94368-fc4d-4fdd-bb0e-266a8d57bfd1" containerName="mariadb-database-create"
Mar 18 13:44:36.865436 master-0 kubenswrapper[27835]: I0318 13:44:36.862621 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ceb0690-8659-42ff-929b-faf3879c7ffb" containerName="mariadb-account-create-update"
Mar 18 13:44:36.917560 master-0 kubenswrapper[27835]: I0318 13:44:36.915616 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.925982 master-0 kubenswrapper[27835]: I0318 13:44:36.922472 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 18 13:44:36.943437 master-0 kubenswrapper[27835]: I0318 13:44:36.934559 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l" (OuterVolumeSpecName: "kube-api-access-l762l") pod "1ceb0690-8659-42ff-929b-faf3879c7ffb" (UID: "1ceb0690-8659-42ff-929b-faf3879c7ffb"). InnerVolumeSpecName "kube-api-access-l762l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:36.943437 master-0 kubenswrapper[27835]: I0318 13:44:36.940747 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-4f519-default-internal-config-data"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983224 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983360 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983485 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983536 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983587 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983619 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983683 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n85m8\" (UniqueName: \"kubernetes.io/projected/7948c85d-504d-49e9-8c64-f201e15eae46-kube-api-access-n85m8\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.983818 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:36.988634 master-0 kubenswrapper[27835]: I0318 13:44:36.984001 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l762l\" (UniqueName: \"kubernetes.io/projected/1ceb0690-8659-42ff-929b-faf3879c7ffb-kube-api-access-l762l\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:36.993839 master-0 kubenswrapper[27835]: I0318 13:44:36.993718 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p" (OuterVolumeSpecName: "kube-api-access-hlr6p") pod "8aeac094-6720-488f-a255-c3042b569033" (UID: "8aeac094-6720-488f-a255-c3042b569033"). InnerVolumeSpecName "kube-api-access-hlr6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:44:36.999805 master-0 kubenswrapper[27835]: I0318 13:44:36.996013 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"]
Mar 18 13:44:37.089908 master-0 kubenswrapper[27835]: I0318 13:44:37.089761 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.089908 master-0 kubenswrapper[27835]: I0318 13:44:37.089894 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.090244 master-0 kubenswrapper[27835]: I0318 13:44:37.089957 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.090244 master-0 kubenswrapper[27835]: I0318 13:44:37.090014 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.091573 master-0 kubenswrapper[27835]: I0318 13:44:37.090828 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-httpd-run\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.100203 master-0 kubenswrapper[27835]: I0318 13:44:37.100149 27835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 13:44:37.100437 master-0 kubenswrapper[27835]: I0318 13:44:37.100204 27835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fe2c78a82f6fd4790d1308034ee8953c7233fac051a28905339bd938cd5ef252/globalmount\"" pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.102112 master-0 kubenswrapper[27835]: I0318 13:44:37.102076 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.102185 master-0 kubenswrapper[27835]: I0318 13:44:37.102171 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n85m8\" (UniqueName: \"kubernetes.io/projected/7948c85d-504d-49e9-8c64-f201e15eae46-kube-api-access-n85m8\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.102375 master-0 kubenswrapper[27835]: I0318 13:44:37.102353 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.102501 master-0 kubenswrapper[27835]: I0318 13:44:37.102478 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.102618 master-0 kubenswrapper[27835]: I0318 13:44:37.102596 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlr6p\" (UniqueName: \"kubernetes.io/projected/8aeac094-6720-488f-a255-c3042b569033-kube-api-access-hlr6p\") on node \"master-0\" DevicePath \"\""
Mar 18 13:44:37.127560 master-0 kubenswrapper[27835]: I0318 13:44:37.127499 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-internal-tls-certs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.131091 master-0 kubenswrapper[27835]: I0318 13:44:37.131045 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7948c85d-504d-49e9-8c64-f201e15eae46-logs\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.132996 master-0 kubenswrapper[27835]: I0318 13:44:37.132943 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-config-data\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.134440 master-0 kubenswrapper[27835]: I0318 13:44:37.134392 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-combined-ca-bundle\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.142330 master-0 kubenswrapper[27835]: I0318 13:44:37.141523 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7948c85d-504d-49e9-8c64-f201e15eae46-scripts\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.161963 master-0 kubenswrapper[27835]: I0318 13:44:37.152726 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n85m8\" (UniqueName: \"kubernetes.io/projected/7948c85d-504d-49e9-8c64-f201e15eae46-kube-api-access-n85m8\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0"
Mar 18 13:44:37.241499 master-0 kubenswrapper[27835]: I0318 13:44:37.238807 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"]
Mar 18 13:44:37.241499 master-0 kubenswrapper[27835]: I0318 13:44:37.241352 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ff789d6f-852a-4819-b19c-09444384ecbe\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8cd87a72-b057-4c0b-a353-f22fec868a61\") pod \"glance-4f519-default-external-api-0\" (UID: \"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1\") " pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:37.255104 master-0 kubenswrapper[27835]: I0318 13:44:37.254917 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758cc74c7c-r928t"
Mar 18 13:44:37.316594 master-0 kubenswrapper[27835]: I0318 13:44:37.316481 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"]
Mar 18 13:44:37.346513 master-0 kubenswrapper[27835]: I0318 13:44:37.343585 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-external-api-0"
Mar 18 13:44:37.367395 master-0 kubenswrapper[27835]: I0318 13:44:37.360354 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 13:44:37.367566 master-0 kubenswrapper[27835]: I0318 13:44:37.367149 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 18 13:44:37.376631 master-0 kubenswrapper[27835]: I0318 13:44:37.376557 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 18 13:44:37.376829 master-0 kubenswrapper[27835]: I0318 13:44:37.376639 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 18 13:44:37.377090 master-0 kubenswrapper[27835]: I0318 13:44:37.377039 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 18 13:44:37.397515 master-0 kubenswrapper[27835]: I0318 13:44:37.392900 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.433128 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName:
\"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.433183 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.433320 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.433368 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qjd\" (UniqueName: \"kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.434369 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.442449 master-0 kubenswrapper[27835]: I0318 13:44:37.435564 27835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.518274 master-0 kubenswrapper[27835]: I0318 13:44:37.515657 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:37.556391 master-0 kubenswrapper[27835]: I0318 13:44:37.556337 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556614 master-0 kubenswrapper[27835]: I0318 13:44:37.556443 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.556614 master-0 kubenswrapper[27835]: I0318 13:44:37.556489 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmzxp\" (UniqueName: \"kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556614 master-0 kubenswrapper[27835]: I0318 13:44:37.556512 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-b6qjd\" (UniqueName: \"kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.556614 master-0 kubenswrapper[27835]: I0318 13:44:37.556544 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.556748 master-0 kubenswrapper[27835]: I0318 13:44:37.556627 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.556748 master-0 kubenswrapper[27835]: I0318 13:44:37.556669 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556748 master-0 kubenswrapper[27835]: I0318 13:44:37.556688 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556748 master-0 kubenswrapper[27835]: I0318 13:44:37.556717 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556748 master-0 kubenswrapper[27835]: I0318 13:44:37.556739 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556895 master-0 kubenswrapper[27835]: I0318 13:44:37.556761 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.556895 master-0 kubenswrapper[27835]: I0318 13:44:37.556806 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.556895 master-0 kubenswrapper[27835]: I0318 13:44:37.556837 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.562434 master-0 kubenswrapper[27835]: I0318 13:44:37.559728 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.562434 master-0 kubenswrapper[27835]: I0318 13:44:37.560638 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.562882 master-0 kubenswrapper[27835]: I0318 13:44:37.562836 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.564217 master-0 kubenswrapper[27835]: I0318 13:44:37.564175 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.570097 master-0 kubenswrapper[27835]: I0318 13:44:37.569367 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.590925 master-0 kubenswrapper[27835]: I0318 13:44:37.576713 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qjd\" 
(UniqueName: \"kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd\") pod \"dnsmasq-dns-758cc74c7c-r928t\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") " pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.658086 master-0 kubenswrapper[27835]: I0318 13:44:37.658041 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gr5q\" (UniqueName: \"kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q\") pod \"38fdf2ba-5074-47b8-b534-c12270b771e8\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " Mar 18 13:44:37.658737 master-0 kubenswrapper[27835]: I0318 13:44:37.658720 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts\") pod \"38fdf2ba-5074-47b8-b534-c12270b771e8\" (UID: \"38fdf2ba-5074-47b8-b534-c12270b771e8\") " Mar 18 13:44:37.659229 master-0 kubenswrapper[27835]: I0318 13:44:37.659181 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38fdf2ba-5074-47b8-b534-c12270b771e8" (UID: "38fdf2ba-5074-47b8-b534-c12270b771e8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:37.659323 master-0 kubenswrapper[27835]: I0318 13:44:37.659306 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.660796 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.660889 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.660934 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.660973 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 
13:44:37.661159 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.661258 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmzxp\" (UniqueName: \"kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.661439 27835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38fdf2ba-5074-47b8-b534-c12270b771e8-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:37.662673 master-0 kubenswrapper[27835]: I0318 13:44:37.662399 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q" (OuterVolumeSpecName: "kube-api-access-5gr5q") pod "38fdf2ba-5074-47b8-b534-c12270b771e8" (UID: "38fdf2ba-5074-47b8-b534-c12270b771e8"). InnerVolumeSpecName "kube-api-access-5gr5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 13:44:37.663309 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 13:44:37.663400 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 13:44:37.663626 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 13:44:37.667799 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 13:44:37.668862 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.676156 master-0 kubenswrapper[27835]: I0318 
13:44:37.670709 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.691923 master-0 kubenswrapper[27835]: I0318 13:44:37.678174 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmzxp\" (UniqueName: \"kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp\") pod \"ironic-inspector-0\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:37.763521 master-0 kubenswrapper[27835]: I0318 13:44:37.763475 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gr5q\" (UniqueName: \"kubernetes.io/projected/38fdf2ba-5074-47b8-b534-c12270b771e8-kube-api-access-5gr5q\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:37.789424 master-0 kubenswrapper[27835]: I0318 13:44:37.787130 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:37.809123 master-0 kubenswrapper[27835]: I0318 13:44:37.808012 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 13:44:37.899304 master-0 kubenswrapper[27835]: I0318 13:44:37.895491 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" Mar 18 13:44:37.899704 master-0 kubenswrapper[27835]: I0318 13:44:37.899655 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63a9-account-create-update-dbfqn" event={"ID":"38fdf2ba-5074-47b8-b534-c12270b771e8","Type":"ContainerDied","Data":"b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3"} Mar 18 13:44:37.899771 master-0 kubenswrapper[27835]: I0318 13:44:37.899722 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b39100a7307ff83643100924a902625152d1195a3755b187ac443c45173c97d3" Mar 18 13:44:37.899771 master-0 kubenswrapper[27835]: I0318 13:44:37.898614 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ea54-account-create-update-85drm" Mar 18 13:44:37.919499 master-0 kubenswrapper[27835]: I0318 13:44:37.895573 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ff10-account-create-update-49jqz" Mar 18 13:44:37.922743 master-0 kubenswrapper[27835]: I0318 13:44:37.898535 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-g9ddk" Mar 18 13:44:38.027513 master-0 kubenswrapper[27835]: I0318 13:44:38.026619 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" podUID="8db812c3-c391-4147-8220-fdd68cdd11d3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 18 13:44:38.205436 master-0 kubenswrapper[27835]: I0318 13:44:38.205106 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3ead41c4-903a-4686-a384-328e4b9fb938\" (UniqueName: \"kubernetes.io/csi/topolvm.io^919e8c73-7055-4dab-b7f5-393f87412c76\") pod \"glance-4f519-default-internal-api-0\" (UID: \"7948c85d-504d-49e9-8c64-f201e15eae46\") " pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:38.244501 master-0 kubenswrapper[27835]: I0318 13:44:38.235946 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:38.340239 master-0 kubenswrapper[27835]: I0318 13:44:38.323876 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bf459ab-dc8e-4a13-bbee-b68d6d031781" path="/var/lib/kubelet/pods/9bf459ab-dc8e-4a13-bbee-b68d6d031781/volumes" Mar 18 13:44:38.340239 master-0 kubenswrapper[27835]: I0318 13:44:38.325175 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-external-api-0"] Mar 18 13:44:38.340239 master-0 kubenswrapper[27835]: I0318 13:44:38.333809 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:38.406005 master-0 kubenswrapper[27835]: I0318 13:44:38.405611 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8645fd5fb8-gm6gg" Mar 18 13:44:38.570972 master-0 kubenswrapper[27835]: I0318 13:44:38.568778 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:44:38.570972 master-0 kubenswrapper[27835]: I0318 13:44:38.569046 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c47db445d-25kc6" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-log" containerID="cri-o://1ee0190063a88a212d966248f9e1f2c5d2ac68c455f5aff12f178d4ff423a4e1" gracePeriod=30 Mar 18 13:44:38.570972 master-0 kubenswrapper[27835]: I0318 13:44:38.569610 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6c47db445d-25kc6" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-api" containerID="cri-o://2f8dfe58ba6f40cba9e5cee1635a31c77ec8ffe24c85dadafa02aacae14c92e8" gracePeriod=30 Mar 18 13:44:38.731461 master-0 kubenswrapper[27835]: I0318 13:44:38.728746 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"] Mar 18 13:44:38.731461 master-0 kubenswrapper[27835]: W0318 13:44:38.730603 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod673c137d_50de_48a3_aac1_036df40897d4.slice/crio-1dfe63b7755c206fe3b1b17d14ef6489735473f86131751c2c5f6c4f55401c8e WatchSource:0}: Error finding container 1dfe63b7755c206fe3b1b17d14ef6489735473f86131751c2c5f6c4f55401c8e: Status 404 returned error can't find the container with id 1dfe63b7755c206fe3b1b17d14ef6489735473f86131751c2c5f6c4f55401c8e Mar 18 13:44:38.927353 master-0 kubenswrapper[27835]: I0318 13:44:38.927211 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" Mar 18 13:44:39.028244 master-0 kubenswrapper[27835]: I0318 13:44:39.028206 27835 generic.go:334] "Generic (PLEG): container finished" podID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerID="1ee0190063a88a212d966248f9e1f2c5d2ac68c455f5aff12f178d4ff423a4e1" exitCode=143 Mar 18 
13:44:39.028330 master-0 kubenswrapper[27835]: I0318 13:44:39.028270 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerDied","Data":"1ee0190063a88a212d966248f9e1f2c5d2ac68c455f5aff12f178d4ff423a4e1"} Mar 18 13:44:39.030787 master-0 kubenswrapper[27835]: I0318 13:44:39.030735 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1","Type":"ContainerStarted","Data":"0c8adc5212cf183b1e99a2a6e7ce9b689d73be1039c7a95a630cab6c2881f50b"} Mar 18 13:44:39.037938 master-0 kubenswrapper[27835]: I0318 13:44:39.035867 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" event={"ID":"673c137d-50de-48a3-aac1-036df40897d4","Type":"ContainerStarted","Data":"1dfe63b7755c206fe3b1b17d14ef6489735473f86131751c2c5f6c4f55401c8e"} Mar 18 13:44:39.051713 master-0 kubenswrapper[27835]: I0318 13:44:39.051497 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f6cbd65db-gzvz8" Mar 18 13:44:39.125294 master-0 kubenswrapper[27835]: I0318 13:44:39.120559 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:39.389354 master-0 kubenswrapper[27835]: I0318 13:44:39.389305 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4f519-default-internal-api-0"] Mar 18 13:44:39.641024 master-0 kubenswrapper[27835]: I0318 13:44:39.639868 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnqgv"] Mar 18 13:44:39.641024 master-0 kubenswrapper[27835]: E0318 13:44:39.640612 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fdf2ba-5074-47b8-b534-c12270b771e8" containerName="mariadb-account-create-update" Mar 18 13:44:39.641024 master-0 kubenswrapper[27835]: 
I0318 13:44:39.640634 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fdf2ba-5074-47b8-b534-c12270b771e8" containerName="mariadb-account-create-update" Mar 18 13:44:39.642160 master-0 kubenswrapper[27835]: I0318 13:44:39.642134 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fdf2ba-5074-47b8-b534-c12270b771e8" containerName="mariadb-account-create-update" Mar 18 13:44:39.643451 master-0 kubenswrapper[27835]: I0318 13:44:39.643289 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.650447 master-0 kubenswrapper[27835]: I0318 13:44:39.649252 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 18 13:44:39.653466 master-0 kubenswrapper[27835]: I0318 13:44:39.652526 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 18 13:44:39.661541 master-0 kubenswrapper[27835]: I0318 13:44:39.661496 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnqgv"] Mar 18 13:44:39.736280 master-0 kubenswrapper[27835]: I0318 13:44:39.736224 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rwb\" (UniqueName: \"kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.736797 master-0 kubenswrapper[27835]: I0318 13:44:39.736319 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " 
pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.736797 master-0 kubenswrapper[27835]: I0318 13:44:39.736382 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.736797 master-0 kubenswrapper[27835]: I0318 13:44:39.736484 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.843713 master-0 kubenswrapper[27835]: I0318 13:44:39.838770 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rwb\" (UniqueName: \"kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.843713 master-0 kubenswrapper[27835]: I0318 13:44:39.838856 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.843713 master-0 kubenswrapper[27835]: I0318 13:44:39.838921 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts\") pod 
\"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.843713 master-0 kubenswrapper[27835]: I0318 13:44:39.839006 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.844547 master-0 kubenswrapper[27835]: I0318 13:44:39.844500 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.859231 master-0 kubenswrapper[27835]: I0318 13:44:39.854990 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.859231 master-0 kubenswrapper[27835]: I0318 13:44:39.859110 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.866458 master-0 kubenswrapper[27835]: I0318 13:44:39.864371 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rwb\" (UniqueName: 
\"kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb\") pod \"nova-cell0-conductor-db-sync-rnqgv\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:39.970489 master-0 kubenswrapper[27835]: I0318 13:44:39.969632 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:44:40.075448 master-0 kubenswrapper[27835]: I0318 13:44:40.074991 27835 generic.go:334] "Generic (PLEG): container finished" podID="673c137d-50de-48a3-aac1-036df40897d4" containerID="38d9ba65e9a3d0e9a2e16c37dc3a3bba8beb86991b1e456a034cc26bcb3d3546" exitCode=0 Mar 18 13:44:40.075448 master-0 kubenswrapper[27835]: I0318 13:44:40.075088 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" event={"ID":"673c137d-50de-48a3-aac1-036df40897d4","Type":"ContainerDied","Data":"38d9ba65e9a3d0e9a2e16c37dc3a3bba8beb86991b1e456a034cc26bcb3d3546"} Mar 18 13:44:40.080485 master-0 kubenswrapper[27835]: I0318 13:44:40.080454 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"7948c85d-504d-49e9-8c64-f201e15eae46","Type":"ContainerStarted","Data":"df3a1c73e6480a424e036c29af79475b8339cfee7f4e1bff0376346849820962"} Mar 18 13:44:40.084879 master-0 kubenswrapper[27835]: I0318 13:44:40.084287 27835 generic.go:334] "Generic (PLEG): container finished" podID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerID="a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f" exitCode=0 Mar 18 13:44:40.085059 master-0 kubenswrapper[27835]: I0318 13:44:40.084934 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"64f8cc11-8a96-401b-9e81-ebfc6db37453","Type":"ContainerDied","Data":"a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f"} Mar 18 13:44:40.085059 master-0 
kubenswrapper[27835]: I0318 13:44:40.084982 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"64f8cc11-8a96-401b-9e81-ebfc6db37453","Type":"ContainerStarted","Data":"dfd4c55d494a62067887016eb2c66c5f442cba7ede9fb07d8fe353b78174100d"} Mar 18 13:44:40.087976 master-0 kubenswrapper[27835]: I0318 13:44:40.087933 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" event={"ID":"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1","Type":"ContainerStarted","Data":"44084f4aca2bac1acb4ad00ab920470f6256c2cacd3d4f9438f9105620fb81a9"} Mar 18 13:44:40.132710 master-0 kubenswrapper[27835]: I0318 13:44:40.132399 27835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:44:40.416405 master-0 kubenswrapper[27835]: I0318 13:44:40.416354 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:44:40.577991 master-0 kubenswrapper[27835]: I0318 13:44:40.575543 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rnqgv"] Mar 18 13:44:40.584932 master-0 kubenswrapper[27835]: W0318 13:44:40.584736 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37131fa0_c66e_4abc_b58f_f84c492056df.slice/crio-ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc WatchSource:0}: Error finding container ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc: Status 404 returned error can't find the container with id ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc Mar 18 13:44:41.109492 master-0 kubenswrapper[27835]: I0318 13:44:41.109207 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-external-api-0" 
event={"ID":"a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1","Type":"ContainerStarted","Data":"bd19fd81dc676d8b74aa327e6a4204202c96ee8b47cb1ae23993fb163dc17c1e"} Mar 18 13:44:41.117370 master-0 kubenswrapper[27835]: I0318 13:44:41.115386 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" event={"ID":"673c137d-50de-48a3-aac1-036df40897d4","Type":"ContainerStarted","Data":"fcc331738f502bf6c2f7e8d9f9c13f69c47d421c8a114260ab7a59a82041a852"} Mar 18 13:44:41.117712 master-0 kubenswrapper[27835]: I0318 13:44:41.117379 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:41.119249 master-0 kubenswrapper[27835]: I0318 13:44:41.119184 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" event={"ID":"7948c85d-504d-49e9-8c64-f201e15eae46","Type":"ContainerStarted","Data":"43ec2924a1d169c58dc6a63f815a612549140733035d251ad47b574dd777d0d2"} Mar 18 13:44:41.121632 master-0 kubenswrapper[27835]: I0318 13:44:41.121570 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" event={"ID":"37131fa0-c66e-4abc-b58f-f84c492056df","Type":"ContainerStarted","Data":"ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc"} Mar 18 13:44:41.254038 master-0 kubenswrapper[27835]: I0318 13:44:41.247054 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f519-default-external-api-0" podStartSLOduration=6.24703498 podStartE2EDuration="6.24703498s" podCreationTimestamp="2026-03-18 13:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:41.239039768 +0000 UTC m=+1245.204251338" watchObservedRunningTime="2026-03-18 13:44:41.24703498 +0000 UTC m=+1245.212246540" Mar 18 13:44:41.280024 master-0 kubenswrapper[27835]: I0318 
13:44:41.279955 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" podStartSLOduration=5.279937462 podStartE2EDuration="5.279937462s" podCreationTimestamp="2026-03-18 13:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:41.261660097 +0000 UTC m=+1245.226871667" watchObservedRunningTime="2026-03-18 13:44:41.279937462 +0000 UTC m=+1245.245149022" Mar 18 13:44:42.137847 master-0 kubenswrapper[27835]: I0318 13:44:42.137741 27835 generic.go:334] "Generic (PLEG): container finished" podID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerID="2f8dfe58ba6f40cba9e5cee1635a31c77ec8ffe24c85dadafa02aacae14c92e8" exitCode=0 Mar 18 13:44:42.137847 master-0 kubenswrapper[27835]: I0318 13:44:42.137792 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerDied","Data":"2f8dfe58ba6f40cba9e5cee1635a31c77ec8ffe24c85dadafa02aacae14c92e8"} Mar 18 13:44:42.140168 master-0 kubenswrapper[27835]: I0318 13:44:42.140140 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerDied","Data":"5fd35d585f61f77da01e7b63f5236b4007ee2cd18fb459f5a50a3fe9098e8fed"} Mar 18 13:44:42.140274 master-0 kubenswrapper[27835]: I0318 13:44:42.140087 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2a793d4-62c6-4482-a5e5-21ed4cc72e33" containerID="5fd35d585f61f77da01e7b63f5236b4007ee2cd18fb459f5a50a3fe9098e8fed" exitCode=0 Mar 18 13:44:42.146758 master-0 kubenswrapper[27835]: I0318 13:44:42.145955 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4f519-default-internal-api-0" 
event={"ID":"7948c85d-504d-49e9-8c64-f201e15eae46","Type":"ContainerStarted","Data":"ed2bfd238f7e2a134f99eb5a1e27da1e85d44da07c919620e6a0aebf09846768"} Mar 18 13:44:42.228747 master-0 kubenswrapper[27835]: I0318 13:44:42.228600 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-4f519-default-internal-api-0" podStartSLOduration=6.228580404 podStartE2EDuration="6.228580404s" podCreationTimestamp="2026-03-18 13:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:44:42.225884272 +0000 UTC m=+1246.191095832" watchObservedRunningTime="2026-03-18 13:44:42.228580404 +0000 UTC m=+1246.193791954" Mar 18 13:44:42.538116 master-0 kubenswrapper[27835]: I0318 13:44:42.536493 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:42.719747 master-0 kubenswrapper[27835]: I0318 13:44:42.719698 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:44:42.868367 master-0 kubenswrapper[27835]: I0318 13:44:42.868231 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.868581 master-0 kubenswrapper[27835]: I0318 13:44:42.868487 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.868581 master-0 kubenswrapper[27835]: I0318 13:44:42.868544 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.871321 master-0 kubenswrapper[27835]: I0318 13:44:42.868981 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrvrb\" (UniqueName: \"kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.871321 master-0 kubenswrapper[27835]: I0318 13:44:42.869190 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.871321 master-0 kubenswrapper[27835]: I0318 13:44:42.869246 27835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.871321 master-0 kubenswrapper[27835]: I0318 13:44:42.869272 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts\") pod \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\" (UID: \"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538\") " Mar 18 13:44:42.871321 master-0 kubenswrapper[27835]: I0318 13:44:42.870652 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs" (OuterVolumeSpecName: "logs") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:44:42.882373 master-0 kubenswrapper[27835]: I0318 13:44:42.880387 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb" (OuterVolumeSpecName: "kube-api-access-zrvrb") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "kube-api-access-zrvrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:42.906727 master-0 kubenswrapper[27835]: I0318 13:44:42.906681 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts" (OuterVolumeSpecName: "scripts") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:42.976255 master-0 kubenswrapper[27835]: I0318 13:44:42.976187 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrvrb\" (UniqueName: \"kubernetes.io/projected/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-kube-api-access-zrvrb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:42.976255 master-0 kubenswrapper[27835]: I0318 13:44:42.976231 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:42.976255 master-0 kubenswrapper[27835]: I0318 13:44:42.976247 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:43.046543 master-0 kubenswrapper[27835]: I0318 13:44:43.046434 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:43.050938 master-0 kubenswrapper[27835]: I0318 13:44:43.050877 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data" (OuterVolumeSpecName: "config-data") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:43.078634 master-0 kubenswrapper[27835]: I0318 13:44:43.078574 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:43.078634 master-0 kubenswrapper[27835]: I0318 13:44:43.078634 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:43.121009 master-0 kubenswrapper[27835]: I0318 13:44:43.120881 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:43.172439 master-0 kubenswrapper[27835]: I0318 13:44:43.170868 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" (UID: "dd2c52d8-a2cf-41a1-ab31-fd9d02b61538"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:43.184049 master-0 kubenswrapper[27835]: I0318 13:44:43.183990 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6c47db445d-25kc6" Mar 18 13:44:43.184626 master-0 kubenswrapper[27835]: I0318 13:44:43.184507 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c47db445d-25kc6" event={"ID":"dd2c52d8-a2cf-41a1-ab31-fd9d02b61538","Type":"ContainerDied","Data":"3a71004c61f057c46e8c65260810d7aee461fe2099126319927dd2b6849e078a"} Mar 18 13:44:43.184626 master-0 kubenswrapper[27835]: I0318 13:44:43.184551 27835 scope.go:117] "RemoveContainer" containerID="2f8dfe58ba6f40cba9e5cee1635a31c77ec8ffe24c85dadafa02aacae14c92e8" Mar 18 13:44:43.184746 master-0 kubenswrapper[27835]: I0318 13:44:43.184661 27835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:43.184746 master-0 kubenswrapper[27835]: I0318 13:44:43.184674 27835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:43.244673 master-0 kubenswrapper[27835]: I0318 13:44:43.244599 27835 scope.go:117] "RemoveContainer" containerID="1ee0190063a88a212d966248f9e1f2c5d2ac68c455f5aff12f178d4ff423a4e1" Mar 18 13:44:43.303547 master-0 kubenswrapper[27835]: I0318 13:44:43.297889 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:44:43.303547 master-0 kubenswrapper[27835]: I0318 13:44:43.301442 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6c47db445d-25kc6"] Mar 18 13:44:44.311057 master-0 kubenswrapper[27835]: I0318 13:44:44.310987 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" path="/var/lib/kubelet/pods/dd2c52d8-a2cf-41a1-ab31-fd9d02b61538/volumes" Mar 18 13:44:45.350744 
master-0 kubenswrapper[27835]: E0318 13:44:45.350666 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.352735 master-0 kubenswrapper[27835]: E0318 13:44:45.352640 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.352735 master-0 kubenswrapper[27835]: E0318 13:44:45.352664 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.354113 master-0 kubenswrapper[27835]: E0318 13:44:45.354042 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.354222 master-0 kubenswrapper[27835]: E0318 13:44:45.354130 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.354222 master-0 kubenswrapper[27835]: E0318 13:44:45.354174 27835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerName="ironic-neutron-agent" Mar 18 13:44:45.358030 master-0 kubenswrapper[27835]: E0318 13:44:45.357995 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" cmd=["/bin/true"] Mar 18 13:44:45.358129 master-0 kubenswrapper[27835]: E0318 13:44:45.358032 27835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529 is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerName="ironic-neutron-agent" Mar 18 13:44:47.281290 master-0 kubenswrapper[27835]: I0318 13:44:47.281126 27835 generic.go:334] "Generic (PLEG): container finished" podID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" exitCode=1 Mar 18 13:44:47.281290 master-0 kubenswrapper[27835]: I0318 13:44:47.281215 27835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerDied","Data":"c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529"} Mar 18 13:44:47.281290 master-0 kubenswrapper[27835]: I0318 13:44:47.281257 27835 scope.go:117] "RemoveContainer" containerID="80b4b2995fefbec0d442226eccc7a5eb10c5dacc44d174dfeb0f7efe356f2ee5" Mar 18 13:44:47.282586 master-0 kubenswrapper[27835]: I0318 13:44:47.282375 27835 scope.go:117] "RemoveContainer" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" Mar 18 13:44:47.282932 master-0 kubenswrapper[27835]: E0318 13:44:47.282827 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-689c666fd-tjnb9_openstack(cc7df07d-4c6b-469f-b007-e3d799a49fd5)\"" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" Mar 18 13:44:47.346799 master-0 kubenswrapper[27835]: I0318 13:44:47.346737 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:47.346799 master-0 kubenswrapper[27835]: I0318 13:44:47.346800 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:47.405549 master-0 kubenswrapper[27835]: I0318 13:44:47.405490 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:47.406816 master-0 kubenswrapper[27835]: I0318 13:44:47.406369 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:47.789983 master-0 kubenswrapper[27835]: I0318 13:44:47.789900 27835 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:44:47.971183 master-0 kubenswrapper[27835]: I0318 13:44:47.971110 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"] Mar 18 13:44:47.971616 master-0 kubenswrapper[27835]: I0318 13:44:47.971436 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="dnsmasq-dns" containerID="cri-o://d23f8cc53a3e4ddc3643e1801a64fc62b1e0b556bc933ce54e5a6a65b3886338" gracePeriod=10 Mar 18 13:44:48.237695 master-0 kubenswrapper[27835]: I0318 13:44:48.237612 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:48.237695 master-0 kubenswrapper[27835]: I0318 13:44:48.237665 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:48.282151 master-0 kubenswrapper[27835]: I0318 13:44:48.280379 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:48.317815 master-0 kubenswrapper[27835]: I0318 13:44:48.317698 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:48.317815 master-0 kubenswrapper[27835]: I0318 13:44:48.317777 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:48.317815 master-0 kubenswrapper[27835]: I0318 13:44:48.317789 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:48.317815 master-0 kubenswrapper[27835]: I0318 13:44:48.317799 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:48.317815 master-0 kubenswrapper[27835]: I0318 13:44:48.317808 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:50.340970 master-0 kubenswrapper[27835]: I0318 13:44:50.340106 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:44:50.340970 master-0 kubenswrapper[27835]: I0318 13:44:50.340149 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:44:50.359239 master-0 kubenswrapper[27835]: I0318 13:44:50.359172 27835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:44:50.359485 master-0 kubenswrapper[27835]: I0318 13:44:50.359383 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:44:50.360872 master-0 kubenswrapper[27835]: I0318 13:44:50.360240 27835 scope.go:117] "RemoveContainer" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" Mar 18 13:44:50.360872 master-0 kubenswrapper[27835]: E0318 13:44:50.360599 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-689c666fd-tjnb9_openstack(cc7df07d-4c6b-469f-b007-e3d799a49fd5)\"" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" Mar 18 13:44:51.112229 master-0 kubenswrapper[27835]: I0318 13:44:51.112159 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.227:5353: connect: connection refused" Mar 18 
13:44:51.352105 master-0 kubenswrapper[27835]: I0318 13:44:51.352046 27835 scope.go:117] "RemoveContainer" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" Mar 18 13:44:51.357629 master-0 kubenswrapper[27835]: E0318 13:44:51.352427 27835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-689c666fd-tjnb9_openstack(cc7df07d-4c6b-469f-b007-e3d799a49fd5)\"" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" podUID="cc7df07d-4c6b-469f-b007-e3d799a49fd5" Mar 18 13:44:52.269010 master-0 kubenswrapper[27835]: I0318 13:44:52.268944 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:52.269304 master-0 kubenswrapper[27835]: I0318 13:44:52.269065 27835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 13:44:52.270672 master-0 kubenswrapper[27835]: I0318 13:44:52.270634 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-external-api-0" Mar 18 13:44:52.317252 master-0 kubenswrapper[27835]: I0318 13:44:52.317205 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:52.317465 master-0 kubenswrapper[27835]: I0318 13:44:52.317261 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-4f519-default-internal-api-0" Mar 18 13:44:53.390312 master-0 kubenswrapper[27835]: I0318 13:44:53.390260 27835 generic.go:334] "Generic (PLEG): container finished" podID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerID="d23f8cc53a3e4ddc3643e1801a64fc62b1e0b556bc933ce54e5a6a65b3886338" exitCode=0 Mar 18 13:44:53.391002 master-0 kubenswrapper[27835]: I0318 13:44:53.390960 27835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" event={"ID":"e55b5aa7-9a4f-4042-91b6-6f03aaeada53","Type":"ContainerDied","Data":"d23f8cc53a3e4ddc3643e1801a64fc62b1e0b556bc933ce54e5a6a65b3886338"} Mar 18 13:44:53.391102 master-0 kubenswrapper[27835]: I0318 13:44:53.391088 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" event={"ID":"e55b5aa7-9a4f-4042-91b6-6f03aaeada53","Type":"ContainerDied","Data":"35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197"} Mar 18 13:44:53.391185 master-0 kubenswrapper[27835]: I0318 13:44:53.391167 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35fa65e6f777e4169c00c7b85d77ee3394f35b34c6ef7bb12fffc4bf39fa8197" Mar 18 13:44:53.411319 master-0 kubenswrapper[27835]: I0318 13:44:53.411275 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" Mar 18 13:44:53.592491 master-0 kubenswrapper[27835]: I0318 13:44:53.591722 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.593770 master-0 kubenswrapper[27835]: I0318 13:44:53.593726 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.593908 master-0 kubenswrapper[27835]: I0318 13:44:53.593890 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.593946 master-0 kubenswrapper[27835]: I0318 13:44:53.593922 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.594000 master-0 kubenswrapper[27835]: I0318 13:44:53.593980 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.594044 master-0 kubenswrapper[27835]: I0318 13:44:53.594027 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc\") pod \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\" (UID: \"e55b5aa7-9a4f-4042-91b6-6f03aaeada53\") " Mar 18 13:44:53.612933 master-0 kubenswrapper[27835]: I0318 13:44:53.612840 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8" (OuterVolumeSpecName: "kube-api-access-8r7n8") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "kube-api-access-8r7n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:53.697672 master-0 kubenswrapper[27835]: I0318 13:44:53.697546 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r7n8\" (UniqueName: \"kubernetes.io/projected/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-kube-api-access-8r7n8\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:53.726267 master-0 kubenswrapper[27835]: I0318 13:44:53.726208 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:53.726544 master-0 kubenswrapper[27835]: I0318 13:44:53.726371 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config" (OuterVolumeSpecName: "config") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:53.749110 master-0 kubenswrapper[27835]: I0318 13:44:53.749043 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:53.752392 master-0 kubenswrapper[27835]: I0318 13:44:53.752311 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:53.797017 master-0 kubenswrapper[27835]: I0318 13:44:53.796953 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e55b5aa7-9a4f-4042-91b6-6f03aaeada53" (UID: "e55b5aa7-9a4f-4042-91b6-6f03aaeada53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:44:53.800216 master-0 kubenswrapper[27835]: I0318 13:44:53.800172 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:53.800216 master-0 kubenswrapper[27835]: I0318 13:44:53.800214 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:53.800337 master-0 kubenswrapper[27835]: I0318 13:44:53.800229 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:53.800337 master-0 kubenswrapper[27835]: I0318 13:44:53.800241 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:53.800337 master-0 kubenswrapper[27835]: I0318 13:44:53.800249 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e55b5aa7-9a4f-4042-91b6-6f03aaeada53-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:54.406791 master-0 kubenswrapper[27835]: I0318 13:44:54.406724 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" event={"ID":"37131fa0-c66e-4abc-b58f-f84c492056df","Type":"ContainerStarted","Data":"705840c8987c6ebbd191fd8898d2385aa5455856b8e6b692fd7890c2f4164429"} Mar 18 13:44:54.411896 master-0 kubenswrapper[27835]: I0318 13:44:54.411830 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"64f8cc11-8a96-401b-9e81-ebfc6db37453","Type":"ContainerStarted","Data":"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601"} Mar 18 13:44:54.412157 master-0 kubenswrapper[27835]: I0318 13:44:54.412121 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="inspector-pxe-init" containerID="cri-o://47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601" gracePeriod=60 Mar 18 13:44:54.418567 master-0 kubenswrapper[27835]: I0318 13:44:54.418517 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bd5547c7-x2vlw" Mar 18 13:44:54.418782 master-0 kubenswrapper[27835]: I0318 13:44:54.418623 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"13960cb98952b9f1cac17e9e61119ce462da2b6a0fa529dcc66cf748054afe7c"} Mar 18 13:44:54.450579 master-0 kubenswrapper[27835]: I0318 13:44:54.450502 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" podStartSLOduration=2.714945365 podStartE2EDuration="15.450486855s" podCreationTimestamp="2026-03-18 13:44:39 +0000 UTC" firstStartedPulling="2026-03-18 13:44:40.589614649 +0000 UTC m=+1244.554826209" lastFinishedPulling="2026-03-18 13:44:53.325156139 +0000 UTC m=+1257.290367699" observedRunningTime="2026-03-18 13:44:54.440950743 +0000 UTC m=+1258.406162313" watchObservedRunningTime="2026-03-18 13:44:54.450486855 +0000 UTC m=+1258.415698425" Mar 18 13:44:54.549449 master-0 kubenswrapper[27835]: I0318 13:44:54.547346 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"] Mar 18 13:44:54.573149 master-0 kubenswrapper[27835]: I0318 13:44:54.573006 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77bd5547c7-x2vlw"] Mar 18 13:44:55.238308 master-0 kubenswrapper[27835]: I0318 13:44:55.237184 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 13:44:55.380975 master-0 kubenswrapper[27835]: I0318 13:44:55.380819 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.380975 master-0 kubenswrapper[27835]: I0318 13:44:55.380903 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381258 master-0 kubenswrapper[27835]: I0318 13:44:55.381049 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381258 master-0 kubenswrapper[27835]: I0318 13:44:55.381155 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381258 master-0 kubenswrapper[27835]: I0318 13:44:55.381213 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmzxp\" (UniqueName: \"kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381258 master-0 kubenswrapper[27835]: I0318 13:44:55.381244 
27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381496 master-0 kubenswrapper[27835]: I0318 13:44:55.381266 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic\") pod \"64f8cc11-8a96-401b-9e81-ebfc6db37453\" (UID: \"64f8cc11-8a96-401b-9e81-ebfc6db37453\") " Mar 18 13:44:55.381593 master-0 kubenswrapper[27835]: I0318 13:44:55.381529 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:44:55.382031 master-0 kubenswrapper[27835]: I0318 13:44:55.381995 27835 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.384897 master-0 kubenswrapper[27835]: I0318 13:44:55.384854 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts" (OuterVolumeSpecName: "scripts") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:55.385908 master-0 kubenswrapper[27835]: I0318 13:44:55.385608 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 13:44:55.386016 master-0 kubenswrapper[27835]: I0318 13:44:55.385898 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config" (OuterVolumeSpecName: "config") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:55.387528 master-0 kubenswrapper[27835]: I0318 13:44:55.387479 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:44:55.397357 master-0 kubenswrapper[27835]: I0318 13:44:55.397300 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp" (OuterVolumeSpecName: "kube-api-access-lmzxp") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "kube-api-access-lmzxp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:44:55.430191 master-0 kubenswrapper[27835]: I0318 13:44:55.430123 27835 generic.go:334] "Generic (PLEG): container finished" podID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerID="47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601" exitCode=0 Mar 18 13:44:55.430894 master-0 kubenswrapper[27835]: I0318 13:44:55.430633 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"64f8cc11-8a96-401b-9e81-ebfc6db37453","Type":"ContainerDied","Data":"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601"} Mar 18 13:44:55.430894 master-0 kubenswrapper[27835]: I0318 13:44:55.430688 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 13:44:55.430894 master-0 kubenswrapper[27835]: I0318 13:44:55.430726 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"64f8cc11-8a96-401b-9e81-ebfc6db37453","Type":"ContainerDied","Data":"dfd4c55d494a62067887016eb2c66c5f442cba7ede9fb07d8fe353b78174100d"} Mar 18 13:44:55.430894 master-0 kubenswrapper[27835]: I0318 13:44:55.430752 27835 scope.go:117] "RemoveContainer" containerID="47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601" Mar 18 13:44:55.447659 master-0 kubenswrapper[27835]: I0318 13:44:55.446504 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64f8cc11-8a96-401b-9e81-ebfc6db37453" (UID: "64f8cc11-8a96-401b-9e81-ebfc6db37453"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:44:55.484698 master-0 kubenswrapper[27835]: I0318 13:44:55.484628 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.484698 master-0 kubenswrapper[27835]: I0318 13:44:55.484692 27835 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/64f8cc11-8a96-401b-9e81-ebfc6db37453-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.484698 master-0 kubenswrapper[27835]: I0318 13:44:55.484711 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.485036 master-0 kubenswrapper[27835]: I0318 13:44:55.484727 27835 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/64f8cc11-8a96-401b-9e81-ebfc6db37453-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.485036 master-0 kubenswrapper[27835]: I0318 13:44:55.484745 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/64f8cc11-8a96-401b-9e81-ebfc6db37453-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.485036 master-0 kubenswrapper[27835]: I0318 13:44:55.484763 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmzxp\" (UniqueName: \"kubernetes.io/projected/64f8cc11-8a96-401b-9e81-ebfc6db37453-kube-api-access-lmzxp\") on node \"master-0\" DevicePath \"\"" Mar 18 13:44:55.514578 master-0 kubenswrapper[27835]: I0318 13:44:55.514381 27835 scope.go:117] "RemoveContainer" containerID="a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f" Mar 18 13:44:55.535399 master-0 
kubenswrapper[27835]: I0318 13:44:55.535353 27835 scope.go:117] "RemoveContainer" containerID="47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601" Mar 18 13:44:55.535837 master-0 kubenswrapper[27835]: E0318 13:44:55.535798 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601\": container with ID starting with 47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601 not found: ID does not exist" containerID="47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601" Mar 18 13:44:55.535897 master-0 kubenswrapper[27835]: I0318 13:44:55.535842 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601"} err="failed to get container status \"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601\": rpc error: code = NotFound desc = could not find container \"47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601\": container with ID starting with 47e69a823b28388293644102dd5d36dcddc7e022adc32762fafb5d3869c30601 not found: ID does not exist" Mar 18 13:44:55.535897 master-0 kubenswrapper[27835]: I0318 13:44:55.535868 27835 scope.go:117] "RemoveContainer" containerID="a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f" Mar 18 13:44:55.536353 master-0 kubenswrapper[27835]: E0318 13:44:55.536293 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f\": container with ID starting with a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f not found: ID does not exist" containerID="a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f" Mar 18 13:44:55.536442 master-0 kubenswrapper[27835]: I0318 13:44:55.536362 
27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f"} err="failed to get container status \"a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f\": rpc error: code = NotFound desc = could not find container \"a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f\": container with ID starting with a57fdf20bbcb5de8e86b1fe52ef95f701e73cc170845538db1d93f8bbb0ea92f not found: ID does not exist" Mar 18 13:44:55.842409 master-0 kubenswrapper[27835]: I0318 13:44:55.842072 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:55.872378 master-0 kubenswrapper[27835]: I0318 13:44:55.871977 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:55.897251 master-0 kubenswrapper[27835]: I0318 13:44:55.897178 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:55.898132 master-0 kubenswrapper[27835]: E0318 13:44:55.898107 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="dnsmasq-dns" Mar 18 13:44:55.898256 master-0 kubenswrapper[27835]: I0318 13:44:55.898241 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="dnsmasq-dns" Mar 18 13:44:55.898517 master-0 kubenswrapper[27835]: E0318 13:44:55.898381 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="init" Mar 18 13:44:55.898628 master-0 kubenswrapper[27835]: I0318 13:44:55.898613 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" containerName="init" Mar 18 13:44:55.898710 master-0 kubenswrapper[27835]: E0318 13:44:55.898699 27835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-log" Mar 18 13:44:55.898775 master-0 kubenswrapper[27835]: I0318 13:44:55.898765 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-log" Mar 18 13:44:55.898866 master-0 kubenswrapper[27835]: E0318 13:44:55.898856 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-api" Mar 18 13:44:55.898927 master-0 kubenswrapper[27835]: I0318 13:44:55.898917 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-api" Mar 18 13:44:55.898990 master-0 kubenswrapper[27835]: E0318 13:44:55.898980 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="ironic-python-agent-init" Mar 18 13:44:55.899046 master-0 kubenswrapper[27835]: I0318 13:44:55.899036 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="ironic-python-agent-init" Mar 18 13:44:55.899117 master-0 kubenswrapper[27835]: E0318 13:44:55.899107 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="inspector-pxe-init" Mar 18 13:44:55.899175 master-0 kubenswrapper[27835]: I0318 13:44:55.899166 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="inspector-pxe-init" Mar 18 13:44:55.899499 master-0 kubenswrapper[27835]: I0318 13:44:55.899484 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-api" Mar 18 13:44:55.899589 master-0 kubenswrapper[27835]: I0318 13:44:55.899578 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" 
containerName="dnsmasq-dns" Mar 18 13:44:55.899671 master-0 kubenswrapper[27835]: I0318 13:44:55.899660 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" containerName="inspector-pxe-init" Mar 18 13:44:55.899735 master-0 kubenswrapper[27835]: I0318 13:44:55.899725 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2c52d8-a2cf-41a1-ab31-fd9d02b61538" containerName="placement-log" Mar 18 13:44:55.903519 master-0 kubenswrapper[27835]: I0318 13:44:55.903475 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 13:44:55.907483 master-0 kubenswrapper[27835]: I0318 13:44:55.907289 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 18 13:44:55.908012 master-0 kubenswrapper[27835]: I0318 13:44:55.907978 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Mar 18 13:44:55.908189 master-0 kubenswrapper[27835]: I0318 13:44:55.908137 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 18 13:44:55.909344 master-0 kubenswrapper[27835]: I0318 13:44:55.908265 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 18 13:44:55.909344 master-0 kubenswrapper[27835]: I0318 13:44:55.908386 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Mar 18 13:44:55.909344 master-0 kubenswrapper[27835]: I0318 13:44:55.909271 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:55.997347 master-0 kubenswrapper[27835]: I0318 13:44:55.997271 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997569 master-0 kubenswrapper[27835]: I0318 13:44:55.997372 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3d880f5c-a4f4-4e41-aa98-185af9802996-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997569 master-0 kubenswrapper[27835]: I0318 13:44:55.997461 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997569 master-0 kubenswrapper[27835]: I0318 13:44:55.997484 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-scripts\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997569 master-0 kubenswrapper[27835]: I0318 13:44:55.997507 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997569 master-0 kubenswrapper[27835]: I0318 13:44:55.997560 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-gpwv4\" (UniqueName: \"kubernetes.io/projected/3d880f5c-a4f4-4e41-aa98-185af9802996-kube-api-access-gpwv4\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997947 master-0 kubenswrapper[27835]: I0318 13:44:55.997581 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-config\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997947 master-0 kubenswrapper[27835]: I0318 13:44:55.997611 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:55.997947 master-0 kubenswrapper[27835]: I0318 13:44:55.997666 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.099848 master-0 kubenswrapper[27835]: I0318 13:44:56.099733 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100053 master-0 kubenswrapper[27835]: I0318 13:44:56.099964 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" 
(UniqueName: \"kubernetes.io/downward-api/3d880f5c-a4f4-4e41-aa98-185af9802996-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100053 master-0 kubenswrapper[27835]: I0318 13:44:56.100042 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100325 master-0 kubenswrapper[27835]: I0318 13:44:56.100255 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-scripts\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100390 master-0 kubenswrapper[27835]: I0318 13:44:56.100371 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100697 master-0 kubenswrapper[27835]: I0318 13:44:56.100668 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwv4\" (UniqueName: \"kubernetes.io/projected/3d880f5c-a4f4-4e41-aa98-185af9802996-kube-api-access-gpwv4\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100765 master-0 kubenswrapper[27835]: I0318 13:44:56.100744 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-config\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.100861 master-0 kubenswrapper[27835]: I0318 13:44:56.100835 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.101045 master-0 kubenswrapper[27835]: I0318 13:44:56.101003 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.103300 master-0 kubenswrapper[27835]: I0318 13:44:56.102817 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.103836 master-0 kubenswrapper[27835]: I0318 13:44:56.103797 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3d880f5c-a4f4-4e41-aa98-185af9802996-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.105651 master-0 kubenswrapper[27835]: I0318 13:44:56.105621 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.105892 master-0 kubenswrapper[27835]: I0318 13:44:56.105854 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.106578 master-0 kubenswrapper[27835]: I0318 13:44:56.106548 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-scripts\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.106628 master-0 kubenswrapper[27835]: I0318 13:44:56.106547 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3d880f5c-a4f4-4e41-aa98-185af9802996-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.108945 master-0 kubenswrapper[27835]: I0318 13:44:56.108895 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-config\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.109249 master-0 kubenswrapper[27835]: I0318 13:44:56.109210 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d880f5c-a4f4-4e41-aa98-185af9802996-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") 
" pod="openstack/ironic-inspector-0" Mar 18 13:44:56.180190 master-0 kubenswrapper[27835]: I0318 13:44:56.180123 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwv4\" (UniqueName: \"kubernetes.io/projected/3d880f5c-a4f4-4e41-aa98-185af9802996-kube-api-access-gpwv4\") pod \"ironic-inspector-0\" (UID: \"3d880f5c-a4f4-4e41-aa98-185af9802996\") " pod="openstack/ironic-inspector-0" Mar 18 13:44:56.241253 master-0 kubenswrapper[27835]: I0318 13:44:56.241195 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 13:44:56.307060 master-0 kubenswrapper[27835]: I0318 13:44:56.306994 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f8cc11-8a96-401b-9e81-ebfc6db37453" path="/var/lib/kubelet/pods/64f8cc11-8a96-401b-9e81-ebfc6db37453/volumes" Mar 18 13:44:56.308079 master-0 kubenswrapper[27835]: I0318 13:44:56.308004 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55b5aa7-9a4f-4042-91b6-6f03aaeada53" path="/var/lib/kubelet/pods/e55b5aa7-9a4f-4042-91b6-6f03aaeada53/volumes" Mar 18 13:44:56.887164 master-0 kubenswrapper[27835]: I0318 13:44:56.887095 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 13:44:57.491335 master-0 kubenswrapper[27835]: I0318 13:44:57.491233 27835 generic.go:334] "Generic (PLEG): container finished" podID="3d880f5c-a4f4-4e41-aa98-185af9802996" containerID="6d969490ada95a5d41f486486596e49789db0468dabcd5b31d9b6a40045aad9b" exitCode=0 Mar 18 13:44:57.491335 master-0 kubenswrapper[27835]: I0318 13:44:57.491329 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerDied","Data":"6d969490ada95a5d41f486486596e49789db0468dabcd5b31d9b6a40045aad9b"} Mar 18 13:44:57.491851 master-0 kubenswrapper[27835]: I0318 13:44:57.491374 27835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"6ec9357d65bbd2306dd1bebea7af636d16d6ddf73e3bf39aa1fcb5ec041c46f4"} Mar 18 13:44:57.497506 master-0 kubenswrapper[27835]: I0318 13:44:57.497440 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 18 13:44:58.507209 master-0 kubenswrapper[27835]: I0318 13:44:58.507140 27835 generic.go:334] "Generic (PLEG): container finished" podID="3d880f5c-a4f4-4e41-aa98-185af9802996" containerID="e620b151a42a65689177ae607564409e959ca7dd14e5b33f8d2501850927dbb4" exitCode=0 Mar 18 13:44:58.507209 master-0 kubenswrapper[27835]: I0318 13:44:58.507209 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerDied","Data":"e620b151a42a65689177ae607564409e959ca7dd14e5b33f8d2501850927dbb4"} Mar 18 13:44:59.523308 master-0 kubenswrapper[27835]: I0318 13:44:59.523254 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"aa125b75bb12a64ed0bbcf54cee80e586bb5f905823ea6d4bf9b3fd8bd6beabd"} Mar 18 13:45:00.547653 master-0 kubenswrapper[27835]: I0318 13:45:00.546218 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"6316e3ffabc16e1a5ba35c132aa084e708901303361e59cef74f92d55ed9c4ea"} Mar 18 13:45:00.547653 master-0 kubenswrapper[27835]: I0318 13:45:00.546297 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"e723ecbfeb4fab2b137083b9e57ce22dfaa6880fc2652f648816aaa9e83a605d"} Mar 18 13:45:01.568476 
master-0 kubenswrapper[27835]: I0318 13:45:01.566786 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"3e8c0e6524e432beb3a3d04565f2cf6c15a83a22f07808075b8872d59dfecb9c"} Mar 18 13:45:02.584236 master-0 kubenswrapper[27835]: I0318 13:45:02.584153 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"3d880f5c-a4f4-4e41-aa98-185af9802996","Type":"ContainerStarted","Data":"96c163f9bd46583b4e0c459a8a2eb36dabd770470d20ce39d84a284d5edf076b"} Mar 18 13:45:02.584930 master-0 kubenswrapper[27835]: I0318 13:45:02.584395 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 13:45:02.637521 master-0 kubenswrapper[27835]: I0318 13:45:02.636110 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.636090342 podStartE2EDuration="7.636090342s" podCreationTimestamp="2026-03-18 13:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:02.622936613 +0000 UTC m=+1266.588148183" watchObservedRunningTime="2026-03-18 13:45:02.636090342 +0000 UTC m=+1266.601301902" Mar 18 13:45:03.600674 master-0 kubenswrapper[27835]: I0318 13:45:03.600590 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 13:45:04.296634 master-0 kubenswrapper[27835]: I0318 13:45:04.296567 27835 scope.go:117] "RemoveContainer" containerID="c4afe50328f90619bc0c3f987f293d63b41dea331885733ac695f63916aa3529" Mar 18 13:45:04.635493 master-0 kubenswrapper[27835]: I0318 13:45:04.635079 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" 
event={"ID":"cc7df07d-4c6b-469f-b007-e3d799a49fd5","Type":"ContainerStarted","Data":"984d54c31a59f111577abf380043629cbc1a9377b9d288b21d858d842aaa28ac"} Mar 18 13:45:04.635493 master-0 kubenswrapper[27835]: I0318 13:45:04.635296 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:45:04.677438 master-0 kubenswrapper[27835]: I0318 13:45:04.676672 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 13:45:05.649456 master-0 kubenswrapper[27835]: I0318 13:45:05.649357 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.242401 master-0 kubenswrapper[27835]: I0318 13:45:06.241953 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.242681 master-0 kubenswrapper[27835]: I0318 13:45:06.242437 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.242681 master-0 kubenswrapper[27835]: I0318 13:45:06.242455 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.242681 master-0 kubenswrapper[27835]: I0318 13:45:06.242467 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.267690 master-0 kubenswrapper[27835]: I0318 13:45:06.267622 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.271159 master-0 kubenswrapper[27835]: I0318 13:45:06.271105 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 18 13:45:06.682365 master-0 kubenswrapper[27835]: I0318 13:45:06.682289 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ironic-inspector-0" Mar 18 13:45:06.683948 master-0 kubenswrapper[27835]: I0318 13:45:06.683916 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 13:45:10.377449 master-0 kubenswrapper[27835]: I0318 13:45:10.377364 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-689c666fd-tjnb9" Mar 18 13:45:11.727908 master-0 kubenswrapper[27835]: I0318 13:45:11.727835 27835 generic.go:334] "Generic (PLEG): container finished" podID="37131fa0-c66e-4abc-b58f-f84c492056df" containerID="705840c8987c6ebbd191fd8898d2385aa5455856b8e6b692fd7890c2f4164429" exitCode=0 Mar 18 13:45:11.727908 master-0 kubenswrapper[27835]: I0318 13:45:11.727908 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" event={"ID":"37131fa0-c66e-4abc-b58f-f84c492056df","Type":"ContainerDied","Data":"705840c8987c6ebbd191fd8898d2385aa5455856b8e6b692fd7890c2f4164429"} Mar 18 13:45:13.270348 master-0 kubenswrapper[27835]: I0318 13:45:13.270298 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:45:13.394830 master-0 kubenswrapper[27835]: I0318 13:45:13.394609 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle\") pod \"37131fa0-c66e-4abc-b58f-f84c492056df\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " Mar 18 13:45:13.394830 master-0 kubenswrapper[27835]: I0318 13:45:13.394822 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data\") pod \"37131fa0-c66e-4abc-b58f-f84c492056df\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " Mar 18 13:45:13.395118 master-0 kubenswrapper[27835]: I0318 13:45:13.394917 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rwb\" (UniqueName: \"kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb\") pod \"37131fa0-c66e-4abc-b58f-f84c492056df\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " Mar 18 13:45:13.395118 master-0 kubenswrapper[27835]: I0318 13:45:13.394961 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts\") pod \"37131fa0-c66e-4abc-b58f-f84c492056df\" (UID: \"37131fa0-c66e-4abc-b58f-f84c492056df\") " Mar 18 13:45:13.399037 master-0 kubenswrapper[27835]: I0318 13:45:13.398998 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts" (OuterVolumeSpecName: "scripts") pod "37131fa0-c66e-4abc-b58f-f84c492056df" (UID: "37131fa0-c66e-4abc-b58f-f84c492056df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:13.400756 master-0 kubenswrapper[27835]: I0318 13:45:13.400672 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb" (OuterVolumeSpecName: "kube-api-access-67rwb") pod "37131fa0-c66e-4abc-b58f-f84c492056df" (UID: "37131fa0-c66e-4abc-b58f-f84c492056df"). InnerVolumeSpecName "kube-api-access-67rwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:45:13.421112 master-0 kubenswrapper[27835]: I0318 13:45:13.420965 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37131fa0-c66e-4abc-b58f-f84c492056df" (UID: "37131fa0-c66e-4abc-b58f-f84c492056df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:13.422194 master-0 kubenswrapper[27835]: I0318 13:45:13.422150 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data" (OuterVolumeSpecName: "config-data") pod "37131fa0-c66e-4abc-b58f-f84c492056df" (UID: "37131fa0-c66e-4abc-b58f-f84c492056df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:13.497782 master-0 kubenswrapper[27835]: I0318 13:45:13.497710 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rwb\" (UniqueName: \"kubernetes.io/projected/37131fa0-c66e-4abc-b58f-f84c492056df-kube-api-access-67rwb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:13.497782 master-0 kubenswrapper[27835]: I0318 13:45:13.497757 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:13.497782 master-0 kubenswrapper[27835]: I0318 13:45:13.497770 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:13.497782 master-0 kubenswrapper[27835]: I0318 13:45:13.497779 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37131fa0-c66e-4abc-b58f-f84c492056df-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:13.755337 master-0 kubenswrapper[27835]: I0318 13:45:13.755205 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" event={"ID":"37131fa0-c66e-4abc-b58f-f84c492056df","Type":"ContainerDied","Data":"ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc"} Mar 18 13:45:13.755337 master-0 kubenswrapper[27835]: I0318 13:45:13.755264 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca29d494455632bd4f3b0a2cbe27883ecdd14fff84e580c862af493c2a0ecdcc" Mar 18 13:45:13.755337 master-0 kubenswrapper[27835]: I0318 13:45:13.755281 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rnqgv" Mar 18 13:45:13.911154 master-0 kubenswrapper[27835]: I0318 13:45:13.911073 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 13:45:13.911791 master-0 kubenswrapper[27835]: E0318 13:45:13.911749 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37131fa0-c66e-4abc-b58f-f84c492056df" containerName="nova-cell0-conductor-db-sync" Mar 18 13:45:13.911791 master-0 kubenswrapper[27835]: I0318 13:45:13.911781 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="37131fa0-c66e-4abc-b58f-f84c492056df" containerName="nova-cell0-conductor-db-sync" Mar 18 13:45:13.912127 master-0 kubenswrapper[27835]: I0318 13:45:13.912081 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="37131fa0-c66e-4abc-b58f-f84c492056df" containerName="nova-cell0-conductor-db-sync" Mar 18 13:45:13.913073 master-0 kubenswrapper[27835]: I0318 13:45:13.913030 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:13.915069 master-0 kubenswrapper[27835]: I0318 13:45:13.915016 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 18 13:45:13.924684 master-0 kubenswrapper[27835]: I0318 13:45:13.924627 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 13:45:14.011133 master-0 kubenswrapper[27835]: I0318 13:45:14.010994 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.011133 master-0 kubenswrapper[27835]: I0318 13:45:14.011133 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.011401 master-0 kubenswrapper[27835]: I0318 13:45:14.011371 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc5j9\" (UniqueName: \"kubernetes.io/projected/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-kube-api-access-mc5j9\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.113971 master-0 kubenswrapper[27835]: I0318 13:45:14.113862 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc5j9\" (UniqueName: \"kubernetes.io/projected/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-kube-api-access-mc5j9\") pod \"nova-cell0-conductor-0\" (UID: 
\"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.114210 master-0 kubenswrapper[27835]: I0318 13:45:14.114108 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.114210 master-0 kubenswrapper[27835]: I0318 13:45:14.114155 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.118463 master-0 kubenswrapper[27835]: I0318 13:45:14.117853 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.119627 master-0 kubenswrapper[27835]: I0318 13:45:14.119569 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.130979 master-0 kubenswrapper[27835]: I0318 13:45:14.130822 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc5j9\" (UniqueName: \"kubernetes.io/projected/abc6cc7b-4e38-41fc-9f09-66f46a74cdbc-kube-api-access-mc5j9\") pod \"nova-cell0-conductor-0\" (UID: \"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc\") " 
pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.277538 master-0 kubenswrapper[27835]: I0318 13:45:14.277370 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:14.778669 master-0 kubenswrapper[27835]: I0318 13:45:14.778597 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 18 13:45:15.797115 master-0 kubenswrapper[27835]: I0318 13:45:15.797037 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc","Type":"ContainerStarted","Data":"d8208dc82c9a849c528145ccbfe65c9dc582de66cffc9f2a6223debf23f00343"} Mar 18 13:45:15.797115 master-0 kubenswrapper[27835]: I0318 13:45:15.797116 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"abc6cc7b-4e38-41fc-9f09-66f46a74cdbc","Type":"ContainerStarted","Data":"525b35fda0cec097c78475fc2f3c6cd8976e5d135032dd29d0d8f6d4b50a4c19"} Mar 18 13:45:15.798026 master-0 kubenswrapper[27835]: I0318 13:45:15.797478 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:15.824815 master-0 kubenswrapper[27835]: I0318 13:45:15.824714 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.824694454 podStartE2EDuration="2.824694454s" podCreationTimestamp="2026-03-18 13:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:15.819896257 +0000 UTC m=+1279.785107847" watchObservedRunningTime="2026-03-18 13:45:15.824694454 +0000 UTC m=+1279.789906034" Mar 18 13:45:19.322951 master-0 kubenswrapper[27835]: I0318 13:45:19.322873 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell0-conductor-0" Mar 18 13:45:19.900519 master-0 kubenswrapper[27835]: I0318 13:45:19.900456 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qd8n8"] Mar 18 13:45:19.902855 master-0 kubenswrapper[27835]: I0318 13:45:19.902823 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:19.906021 master-0 kubenswrapper[27835]: I0318 13:45:19.905969 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 18 13:45:19.906271 master-0 kubenswrapper[27835]: I0318 13:45:19.906234 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 18 13:45:19.940055 master-0 kubenswrapper[27835]: I0318 13:45:19.939920 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qd8n8"] Mar 18 13:45:20.112285 master-0 kubenswrapper[27835]: I0318 13:45:20.107552 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 18 13:45:20.112565 master-0 kubenswrapper[27835]: I0318 13:45:20.112373 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.128518 master-0 kubenswrapper[27835]: I0318 13:45:20.119850 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4dj\" (UniqueName: \"kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.128518 master-0 kubenswrapper[27835]: I0318 13:45:20.120024 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.128518 master-0 kubenswrapper[27835]: I0318 13:45:20.120306 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.128518 master-0 kubenswrapper[27835]: I0318 13:45:20.120332 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.133435 master-0 kubenswrapper[27835]: I0318 13:45:20.132706 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Mar 18 13:45:20.184055 
master-0 kubenswrapper[27835]: I0318 13:45:20.178500 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 18 13:45:20.221444 master-0 kubenswrapper[27835]: I0318 13:45:20.221081 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 13:45:20.227792 master-0 kubenswrapper[27835]: I0318 13:45:20.227657 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 13:45:20.234896 master-0 kubenswrapper[27835]: I0318 13:45:20.228073 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.235165 master-0 kubenswrapper[27835]: I0318 13:45:20.235136 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.235342 master-0 kubenswrapper[27835]: I0318 13:45:20.235325 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.235606 master-0 kubenswrapper[27835]: I0318 13:45:20.235591 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4dj\" (UniqueName: 
\"kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.235862 master-0 kubenswrapper[27835]: I0318 13:45:20.235815 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.235994 master-0 kubenswrapper[27835]: I0318 13:45:20.235980 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.236226 master-0 kubenswrapper[27835]: I0318 13:45:20.236167 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97vm\" (UniqueName: \"kubernetes.io/projected/0a015210-4858-461a-955f-761275fc2b6a-kube-api-access-r97vm\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.239839 master-0 kubenswrapper[27835]: I0318 13:45:20.239775 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 13:45:20.241314 master-0 kubenswrapper[27835]: I0318 13:45:20.241282 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: 
\"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.281706 master-0 kubenswrapper[27835]: I0318 13:45:20.281591 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:45:20.300709 master-0 kubenswrapper[27835]: I0318 13:45:20.300670 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.304821 master-0 kubenswrapper[27835]: I0318 13:45:20.304764 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.320493 master-0 kubenswrapper[27835]: I0318 13:45:20.315817 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4dj\" (UniqueName: \"kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj\") pod \"nova-cell0-cell-mapping-qd8n8\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") " pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.322195 master-0 kubenswrapper[27835]: I0318 13:45:20.322166 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:45:20.330128 master-0 kubenswrapper[27835]: I0318 13:45:20.330080 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.338931 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339117 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339205 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxx8\" (UniqueName: \"kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339249 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97vm\" (UniqueName: \"kubernetes.io/projected/0a015210-4858-461a-955f-761275fc2b6a-kube-api-access-r97vm\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339284 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339407 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.339896 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.340060 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8db6m\" (UniqueName: \"kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.340127 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.343319 master-0 kubenswrapper[27835]: I0318 13:45:20.340193 27835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.369919 master-0 kubenswrapper[27835]: I0318 13:45:20.369818 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.379576 master-0 kubenswrapper[27835]: I0318 13:45:20.373366 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a015210-4858-461a-955f-761275fc2b6a-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.428838 master-0 kubenswrapper[27835]: I0318 13:45:20.424530 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97vm\" (UniqueName: \"kubernetes.io/projected/0a015210-4858-461a-955f-761275fc2b6a-kube-api-access-r97vm\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"0a015210-4858-461a-955f-761275fc2b6a\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.442734 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.442823 27835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.442901 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.442937 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxx8\" (UniqueName: \"kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.442971 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.443007 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.443549 master-0 kubenswrapper[27835]: I0318 13:45:20.443090 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8db6m\" (UniqueName: 
\"kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.444372 master-0 kubenswrapper[27835]: I0318 13:45:20.443969 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.448525 master-0 kubenswrapper[27835]: I0318 13:45:20.447807 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 13:45:20.448525 master-0 kubenswrapper[27835]: I0318 13:45:20.447868 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:45:20.455494 master-0 kubenswrapper[27835]: I0318 13:45:20.455392 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:20.464223 master-0 kubenswrapper[27835]: I0318 13:45:20.464129 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.465717 master-0 kubenswrapper[27835]: I0318 13:45:20.465683 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.478372 master-0 kubenswrapper[27835]: I0318 13:45:20.477216 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.479720 master-0 kubenswrapper[27835]: I0318 13:45:20.479647 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:20.479874 master-0 kubenswrapper[27835]: I0318 13:45:20.479847 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:45:20.498305 master-0 kubenswrapper[27835]: I0318 13:45:20.498262 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 13:45:20.507479 master-0 kubenswrapper[27835]: I0318 13:45:20.506128 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxx8\" (UniqueName: \"kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.517280 master-0 kubenswrapper[27835]: I0318 13:45:20.517183 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:20.528689 master-0 kubenswrapper[27835]: I0318 13:45:20.528638 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data\") pod \"nova-scheduler-0\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") " pod="openstack/nova-scheduler-0" Mar 18 13:45:20.571070 master-0 kubenswrapper[27835]: I0318 13:45:20.570999 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:45:20.577440 master-0 kubenswrapper[27835]: I0318 13:45:20.576998 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qd8n8" Mar 18 13:45:20.631067 master-0 kubenswrapper[27835]: I0318 13:45:20.630993 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:20.633221 master-0 kubenswrapper[27835]: I0318 13:45:20.633166 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.642405 master-0 kubenswrapper[27835]: I0318 13:45:20.637924 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 18 13:45:20.667399 master-0 kubenswrapper[27835]: I0318 13:45:20.667342 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8db6m\" (UniqueName: \"kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m\") pod \"nova-api-0\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " pod="openstack/nova-api-0" Mar 18 13:45:20.679169 master-0 kubenswrapper[27835]: I0318 13:45:20.679042 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92qhb\" (UniqueName: \"kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.679543 master-0 kubenswrapper[27835]: I0318 13:45:20.679523 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.679938 master-0 kubenswrapper[27835]: I0318 13:45:20.679901 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.680070 master-0 kubenswrapper[27835]: I0318 13:45:20.680055 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.688898 master-0 kubenswrapper[27835]: I0318 13:45:20.687773 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:20.783698 master-0 kubenswrapper[27835]: I0318 13:45:20.783625 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.783892 master-0 kubenswrapper[27835]: I0318 13:45:20.783756 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.783892 master-0 kubenswrapper[27835]: I0318 13:45:20.783813 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.783892 master-0 kubenswrapper[27835]: I0318 13:45:20.783880 27835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92qhb\" (UniqueName: \"kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.784074 master-0 kubenswrapper[27835]: I0318 13:45:20.783908 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwrz\" (UniqueName: \"kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.784074 master-0 kubenswrapper[27835]: I0318 13:45:20.784022 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.784155 master-0 kubenswrapper[27835]: I0318 13:45:20.784104 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.785255 master-0 kubenswrapper[27835]: I0318 13:45:20.785198 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.791373 master-0 kubenswrapper[27835]: I0318 13:45:20.791331 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.791518 master-0 kubenswrapper[27835]: I0318 13:45:20.791389 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"] Mar 18 13:45:20.794154 master-0 kubenswrapper[27835]: I0318 13:45:20.794115 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:20.800100 master-0 kubenswrapper[27835]: I0318 13:45:20.800034 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.804690 master-0 kubenswrapper[27835]: I0318 13:45:20.804632 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"] Mar 18 13:45:20.812470 master-0 kubenswrapper[27835]: I0318 13:45:20.812379 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92qhb\" (UniqueName: \"kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb\") pod \"nova-metadata-0\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") " pod="openstack/nova-metadata-0" Mar 18 13:45:20.831519 master-0 kubenswrapper[27835]: I0318 13:45:20.831486 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 13:45:20.890450 master-0 kubenswrapper[27835]: I0318 13:45:20.887571 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.890450 master-0 kubenswrapper[27835]: I0318 13:45:20.887643 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dwrz\" (UniqueName: \"kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.890450 master-0 kubenswrapper[27835]: I0318 13:45:20.887725 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.892561 master-0 kubenswrapper[27835]: I0318 13:45:20.892340 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:45:20.920843 master-0 kubenswrapper[27835]: I0318 13:45:20.919595 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.921061 master-0 kubenswrapper[27835]: I0318 13:45:20.920909 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.927738 master-0 kubenswrapper[27835]: I0318 13:45:20.925888 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dwrz\" (UniqueName: \"kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:20.994059 master-0 kubenswrapper[27835]: I0318 13:45:20.994020 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:20.994274 master-0 kubenswrapper[27835]: I0318 13:45:20.994249 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmdr\" (UniqueName: \"kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: 
\"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.001375 master-0 kubenswrapper[27835]: I0318 13:45:20.996018 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:21.001375 master-0 kubenswrapper[27835]: I0318 13:45:20.996483 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.001375 master-0 kubenswrapper[27835]: I0318 13:45:20.996602 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.001375 master-0 kubenswrapper[27835]: I0318 13:45:20.999738 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.001375 master-0 kubenswrapper[27835]: I0318 13:45:20.999888 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 
kubenswrapper[27835]: I0318 13:45:21.102194 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.103334 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.103434 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.103574 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.103652 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 
kubenswrapper[27835]: I0318 13:45:21.103724 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.103804 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmdr\" (UniqueName: \"kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.104298 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.104647 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 kubenswrapper[27835]: I0318 13:45:21.105352 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:45:21.188624 master-0 
kubenswrapper[27835]: I0318 13:45:21.112313 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4"
Mar 18 13:45:21.425307 master-0 kubenswrapper[27835]: I0318 13:45:21.425148 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmdr\" (UniqueName: \"kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr\") pod \"dnsmasq-dns-78f68d4c8f-sdxd4\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4"
Mar 18 13:45:21.527563 master-0 kubenswrapper[27835]: I0318 13:45:21.525710 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4"
Mar 18 13:45:21.546552 master-0 kubenswrapper[27835]: I0318 13:45:21.546240 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-89c8v"]
Mar 18 13:45:21.548624 master-0 kubenswrapper[27835]: I0318 13:45:21.548597 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.552095 master-0 kubenswrapper[27835]: I0318 13:45:21.551940 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Mar 18 13:45:21.552674 master-0 kubenswrapper[27835]: I0318 13:45:21.552652 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 18 13:45:21.562385 master-0 kubenswrapper[27835]: I0318 13:45:21.562169 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-89c8v"]
Mar 18 13:45:21.651713 master-0 kubenswrapper[27835]: I0318 13:45:21.651651 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv6r8\" (UniqueName: \"kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.652207 master-0 kubenswrapper[27835]: I0318 13:45:21.651727 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.652207 master-0 kubenswrapper[27835]: I0318 13:45:21.651780 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.652207 master-0 kubenswrapper[27835]: I0318 13:45:21.651913 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.708763 master-0 kubenswrapper[27835]: I0318 13:45:21.707962 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 18 13:45:21.751648 master-0 kubenswrapper[27835]: I0318 13:45:21.749235 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qd8n8"]
Mar 18 13:45:21.755568 master-0 kubenswrapper[27835]: I0318 13:45:21.755528 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.755754 master-0 kubenswrapper[27835]: I0318 13:45:21.755701 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv6r8\" (UniqueName: \"kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.755808 master-0 kubenswrapper[27835]: I0318 13:45:21.755752 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.755841 master-0 kubenswrapper[27835]: I0318 13:45:21.755815 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.760945 master-0 kubenswrapper[27835]: I0318 13:45:21.760811 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.777475 master-0 kubenswrapper[27835]: I0318 13:45:21.773199 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv6r8\" (UniqueName: \"kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.802053 master-0 kubenswrapper[27835]: I0318 13:45:21.801817 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.822627 master-0 kubenswrapper[27835]: I0318 13:45:21.820583 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data\") pod \"nova-cell1-conductor-db-sync-89c8v\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:21.939927 master-0 kubenswrapper[27835]: I0318 13:45:21.939302 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qd8n8" event={"ID":"d09230d0-21c5-4e63-b56d-f9346dce706d","Type":"ContainerStarted","Data":"b0e4a72c06d475a3f44426a0a05748a7a7153564b4dc6652d75d26aa17284faf"}
Mar 18 13:45:21.942906 master-0 kubenswrapper[27835]: I0318 13:45:21.940793 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"0a015210-4858-461a-955f-761275fc2b6a","Type":"ContainerStarted","Data":"0dabb0cf0c1a9bd44fcfd0920fddd2c7508c896f719e27365729b6f0436eea67"}
Mar 18 13:45:22.041325 master-0 kubenswrapper[27835]: I0318 13:45:22.037293 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:22.053347 master-0 kubenswrapper[27835]: I0318 13:45:22.053124 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:45:22.072585 master-0 kubenswrapper[27835]: I0318 13:45:22.070284 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-89c8v"
Mar 18 13:45:22.263589 master-0 kubenswrapper[27835]: I0318 13:45:22.261798 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 13:45:22.314452 master-0 kubenswrapper[27835]: I0318 13:45:22.313494 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 18 13:45:22.493162 master-0 kubenswrapper[27835]: I0318 13:45:22.493119 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"]
Mar 18 13:45:22.706701 master-0 kubenswrapper[27835]: I0318 13:45:22.705075 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-89c8v"]
Mar 18 13:45:22.977064 master-0 kubenswrapper[27835]: I0318 13:45:22.976177 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qd8n8" event={"ID":"d09230d0-21c5-4e63-b56d-f9346dce706d","Type":"ContainerStarted","Data":"653985effad5ec03456c4bfe74f3a1618ad985a342ef420b9306903ca9bd8592"}
Mar 18 13:45:22.982328 master-0 kubenswrapper[27835]: I0318 13:45:22.979946 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-89c8v" event={"ID":"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6","Type":"ContainerStarted","Data":"c619a97cbf9c36a88134b98363e7f371d1d891c3b7f9e4594c8c8880c8c93238"}
Mar 18 13:45:22.986877 master-0 kubenswrapper[27835]: I0318 13:45:22.986424 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29279da3-0357-41fb-8f23-25040f3130cb","Type":"ContainerStarted","Data":"bba82ce9557799868d95e09db7bd3433576be508491db6bf8578fba1f622f6f8"}
Mar 18 13:45:22.989732 master-0 kubenswrapper[27835]: I0318 13:45:22.989470 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e75fe206-50da-483e-8b9a-a86adf0082ac","Type":"ContainerStarted","Data":"084196e01e9f971660ddd654dc5a05541b7a809dacd24125eca0d094f9edbbd8"}
Mar 18 13:45:22.995549 master-0 kubenswrapper[27835]: I0318 13:45:22.995452 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerStarted","Data":"b03ee83f1b51e73d1cf77da5413dfa5ab9f67d479e5f6a09c4d755ebf92bf606"}
Mar 18 13:45:22.999530 master-0 kubenswrapper[27835]: I0318 13:45:22.996692 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qd8n8" podStartSLOduration=3.996680997 podStartE2EDuration="3.996680997s" podCreationTimestamp="2026-03-18 13:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:22.992522197 +0000 UTC m=+1286.957733757" watchObservedRunningTime="2026-03-18 13:45:22.996680997 +0000 UTC m=+1286.961892557"
Mar 18 13:45:23.006635 master-0 kubenswrapper[27835]: I0318 13:45:23.003932 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerStarted","Data":"1d66f82ba5ae412ba63f68f61d61d3b42793e3a88b3c5d56bc4623c1bed7487f"}
Mar 18 13:45:23.034650 master-0 kubenswrapper[27835]: I0318 13:45:23.034592 27835 generic.go:334] "Generic (PLEG): container finished" podID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerID="f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3" exitCode=0
Mar 18 13:45:23.034919 master-0 kubenswrapper[27835]: I0318 13:45:23.034900 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" event={"ID":"b59d4086-4f07-49d4-bbc4-6fbb69f545c7","Type":"ContainerDied","Data":"f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3"}
Mar 18 13:45:23.035005 master-0 kubenswrapper[27835]: I0318 13:45:23.034992 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" event={"ID":"b59d4086-4f07-49d4-bbc4-6fbb69f545c7","Type":"ContainerStarted","Data":"b6dc17a96f87bda3cd5426f233d94106e1bc48f14b1eeb254b0202bc013b842a"}
Mar 18 13:45:24.054513 master-0 kubenswrapper[27835]: I0318 13:45:24.054349 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" event={"ID":"b59d4086-4f07-49d4-bbc4-6fbb69f545c7","Type":"ContainerStarted","Data":"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994"}
Mar 18 13:45:24.055161 master-0 kubenswrapper[27835]: I0318 13:45:24.054506 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4"
Mar 18 13:45:24.058976 master-0 kubenswrapper[27835]: I0318 13:45:24.058931 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-89c8v" event={"ID":"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6","Type":"ContainerStarted","Data":"b989668992718d35f26eb9f1e2a078655de8ac587183a2b8564ff5bbb51d870c"}
Mar 18 13:45:24.090439 master-0 kubenswrapper[27835]: I0318 13:45:24.090104 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" podStartSLOduration=4.090087756 podStartE2EDuration="4.090087756s" podCreationTimestamp="2026-03-18 13:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:24.081846968 +0000 UTC m=+1288.047058528" watchObservedRunningTime="2026-03-18 13:45:24.090087756 +0000 UTC m=+1288.055299326"
Mar 18 13:45:24.110448 master-0 kubenswrapper[27835]: I0318 13:45:24.109320 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-89c8v" podStartSLOduration=3.109303916 podStartE2EDuration="3.109303916s" podCreationTimestamp="2026-03-18 13:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:24.108193257 +0000 UTC m=+1288.073404817" watchObservedRunningTime="2026-03-18 13:45:24.109303916 +0000 UTC m=+1288.074515476"
Mar 18 13:45:24.977880 master-0 kubenswrapper[27835]: I0318 13:45:24.977453 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 13:45:25.005422 master-0 kubenswrapper[27835]: I0318 13:45:25.005341 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 18 13:45:27.108247 master-0 kubenswrapper[27835]: I0318 13:45:27.107454 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29279da3-0357-41fb-8f23-25040f3130cb","Type":"ContainerStarted","Data":"1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9"}
Mar 18 13:45:27.111492 master-0 kubenswrapper[27835]: I0318 13:45:27.111437 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e75fe206-50da-483e-8b9a-a86adf0082ac","Type":"ContainerStarted","Data":"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"}
Mar 18 13:45:27.111614 master-0 kubenswrapper[27835]: I0318 13:45:27.111562 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e75fe206-50da-483e-8b9a-a86adf0082ac" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9" gracePeriod=30
Mar 18 13:45:27.118699 master-0 kubenswrapper[27835]: I0318 13:45:27.118645 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerStarted","Data":"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc"}
Mar 18 13:45:27.118699 master-0 kubenswrapper[27835]: I0318 13:45:27.118699 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerStarted","Data":"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf"}
Mar 18 13:45:27.124074 master-0 kubenswrapper[27835]: I0318 13:45:27.123680 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerStarted","Data":"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"}
Mar 18 13:45:27.124074 master-0 kubenswrapper[27835]: I0318 13:45:27.123745 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerStarted","Data":"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e"}
Mar 18 13:45:27.124074 master-0 kubenswrapper[27835]: I0318 13:45:27.123750 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-log" containerID="cri-o://f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e" gracePeriod=30
Mar 18 13:45:27.124074 master-0 kubenswrapper[27835]: I0318 13:45:27.123825 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-metadata" containerID="cri-o://a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9" gracePeriod=30
Mar 18 13:45:27.135562 master-0 kubenswrapper[27835]: I0318 13:45:27.135314 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.81463779 podStartE2EDuration="7.135286324s" podCreationTimestamp="2026-03-18 13:45:20 +0000 UTC" firstStartedPulling="2026-03-18 13:45:22.055077642 +0000 UTC m=+1286.020289202" lastFinishedPulling="2026-03-18 13:45:26.375726176 +0000 UTC m=+1290.340937736" observedRunningTime="2026-03-18 13:45:27.129054929 +0000 UTC m=+1291.094266519" watchObservedRunningTime="2026-03-18 13:45:27.135286324 +0000 UTC m=+1291.100497884"
Mar 18 13:45:27.203245 master-0 kubenswrapper[27835]: I0318 13:45:27.202014 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.038399633 podStartE2EDuration="7.201994423s" podCreationTimestamp="2026-03-18 13:45:20 +0000 UTC" firstStartedPulling="2026-03-18 13:45:22.225551982 +0000 UTC m=+1286.190763542" lastFinishedPulling="2026-03-18 13:45:26.389146772 +0000 UTC m=+1290.354358332" observedRunningTime="2026-03-18 13:45:27.164474989 +0000 UTC m=+1291.129686559" watchObservedRunningTime="2026-03-18 13:45:27.201994423 +0000 UTC m=+1291.167205983"
Mar 18 13:45:27.205798 master-0 kubenswrapper[27835]: I0318 13:45:27.205740 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.990862222 podStartE2EDuration="7.205726042s" podCreationTimestamp="2026-03-18 13:45:20 +0000 UTC" firstStartedPulling="2026-03-18 13:45:22.248827699 +0000 UTC m=+1286.214039259" lastFinishedPulling="2026-03-18 13:45:26.463691519 +0000 UTC m=+1290.428903079" observedRunningTime="2026-03-18 13:45:27.185086395 +0000 UTC m=+1291.150297975" watchObservedRunningTime="2026-03-18 13:45:27.205726042 +0000 UTC m=+1291.170937602"
Mar 18 13:45:27.215109 master-0 kubenswrapper[27835]: I0318 13:45:27.215038 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8952239669999997 podStartE2EDuration="7.215012859s" podCreationTimestamp="2026-03-18 13:45:20 +0000 UTC" firstStartedPulling="2026-03-18 13:45:22.069083453 +0000 UTC m=+1286.034295013" lastFinishedPulling="2026-03-18 13:45:26.388872345 +0000 UTC m=+1290.354083905" observedRunningTime="2026-03-18 13:45:27.208337311 +0000 UTC m=+1291.173548871" watchObservedRunningTime="2026-03-18 13:45:27.215012859 +0000 UTC m=+1291.180224409"
Mar 18 13:45:28.135331 master-0 kubenswrapper[27835]: I0318 13:45:28.135265 27835 generic.go:334] "Generic (PLEG): container finished" podID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerID="f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e" exitCode=143
Mar 18 13:45:28.136293 master-0 kubenswrapper[27835]: I0318 13:45:28.136249 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerDied","Data":"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e"}
Mar 18 13:45:30.578231 master-0 kubenswrapper[27835]: I0318 13:45:30.578154 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 18 13:45:30.578950 master-0 kubenswrapper[27835]: I0318 13:45:30.578441 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 18 13:45:30.621766 master-0 kubenswrapper[27835]: I0318 13:45:30.621696 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 18 13:45:30.833085 master-0 kubenswrapper[27835]: I0318 13:45:30.832909 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 13:45:30.833085 master-0 kubenswrapper[27835]: I0318 13:45:30.833043 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 13:45:30.997087 master-0 kubenswrapper[27835]: I0318 13:45:30.997026 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:45:31.236508 master-0 kubenswrapper[27835]: I0318 13:45:31.236436 27835 generic.go:334] "Generic (PLEG): container finished" podID="d09230d0-21c5-4e63-b56d-f9346dce706d" containerID="653985effad5ec03456c4bfe74f3a1618ad985a342ef420b9306903ca9bd8592" exitCode=0
Mar 18 13:45:31.236822 master-0 kubenswrapper[27835]: I0318 13:45:31.236672 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qd8n8" event={"ID":"d09230d0-21c5-4e63-b56d-f9346dce706d","Type":"ContainerDied","Data":"653985effad5ec03456c4bfe74f3a1618ad985a342ef420b9306903ca9bd8592"}
Mar 18 13:45:31.289357 master-0 kubenswrapper[27835]: I0318 13:45:31.289301 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 18 13:45:31.532020 master-0 kubenswrapper[27835]: I0318 13:45:31.531608 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4"
Mar 18 13:45:31.648724 master-0 kubenswrapper[27835]: I0318 13:45:31.648671 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"]
Mar 18 13:45:31.649308 master-0 kubenswrapper[27835]: I0318 13:45:31.648909 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="dnsmasq-dns" containerID="cri-o://fcc331738f502bf6c2f7e8d9f9c13f69c47d421c8a114260ab7a59a82041a852" gracePeriod=10
Mar 18 13:45:31.937568 master-0 kubenswrapper[27835]: I0318 13:45:31.937483 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:45:31.939219 master-0 kubenswrapper[27835]: I0318 13:45:31.937558 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.253:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:45:32.788513 master-0 kubenswrapper[27835]: I0318 13:45:32.788371 27835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.246:5353: connect: connection refused"
Mar 18 13:45:37.085402 master-0 kubenswrapper[27835]: I0318 13:45:37.085318 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qd8n8"
Mar 18 13:45:37.096938 master-0 kubenswrapper[27835]: I0318 13:45:37.096849 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4dj\" (UniqueName: \"kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj\") pod \"d09230d0-21c5-4e63-b56d-f9346dce706d\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") "
Mar 18 13:45:37.097266 master-0 kubenswrapper[27835]: I0318 13:45:37.096954 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle\") pod \"d09230d0-21c5-4e63-b56d-f9346dce706d\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") "
Mar 18 13:45:37.097266 master-0 kubenswrapper[27835]: I0318 13:45:37.097049 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts\") pod \"d09230d0-21c5-4e63-b56d-f9346dce706d\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") "
Mar 18 13:45:37.097266 master-0 kubenswrapper[27835]: I0318 13:45:37.097118 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data\") pod \"d09230d0-21c5-4e63-b56d-f9346dce706d\" (UID: \"d09230d0-21c5-4e63-b56d-f9346dce706d\") "
Mar 18 13:45:37.101733 master-0 kubenswrapper[27835]: I0318 13:45:37.101669 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj" (OuterVolumeSpecName: "kube-api-access-vw4dj") pod "d09230d0-21c5-4e63-b56d-f9346dce706d" (UID: "d09230d0-21c5-4e63-b56d-f9346dce706d"). InnerVolumeSpecName "kube-api-access-vw4dj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:45:37.118564 master-0 kubenswrapper[27835]: I0318 13:45:37.118521 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts" (OuterVolumeSpecName: "scripts") pod "d09230d0-21c5-4e63-b56d-f9346dce706d" (UID: "d09230d0-21c5-4e63-b56d-f9346dce706d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:37.149109 master-0 kubenswrapper[27835]: I0318 13:45:37.148833 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data" (OuterVolumeSpecName: "config-data") pod "d09230d0-21c5-4e63-b56d-f9346dce706d" (UID: "d09230d0-21c5-4e63-b56d-f9346dce706d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:37.150389 master-0 kubenswrapper[27835]: I0318 13:45:37.149292 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d09230d0-21c5-4e63-b56d-f9346dce706d" (UID: "d09230d0-21c5-4e63-b56d-f9346dce706d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:37.199885 master-0 kubenswrapper[27835]: I0318 13:45:37.199822 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4dj\" (UniqueName: \"kubernetes.io/projected/d09230d0-21c5-4e63-b56d-f9346dce706d-kube-api-access-vw4dj\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.199885 master-0 kubenswrapper[27835]: I0318 13:45:37.199878 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.199885 master-0 kubenswrapper[27835]: I0318 13:45:37.199892 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.200145 master-0 kubenswrapper[27835]: I0318 13:45:37.199904 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09230d0-21c5-4e63-b56d-f9346dce706d-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.338715 master-0 kubenswrapper[27835]: I0318 13:45:37.338647 27835 generic.go:334] "Generic (PLEG): container finished" podID="aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" containerID="b989668992718d35f26eb9f1e2a078655de8ac587183a2b8564ff5bbb51d870c" exitCode=0
Mar 18 13:45:37.338933 master-0 kubenswrapper[27835]: I0318 13:45:37.338768 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-89c8v" event={"ID":"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6","Type":"ContainerDied","Data":"b989668992718d35f26eb9f1e2a078655de8ac587183a2b8564ff5bbb51d870c"}
Mar 18 13:45:37.346659 master-0 kubenswrapper[27835]: I0318 13:45:37.346338 27835 generic.go:334] "Generic (PLEG): container finished" podID="673c137d-50de-48a3-aac1-036df40897d4" containerID="fcc331738f502bf6c2f7e8d9f9c13f69c47d421c8a114260ab7a59a82041a852" exitCode=0
Mar 18 13:45:37.346659 master-0 kubenswrapper[27835]: I0318 13:45:37.346465 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" event={"ID":"673c137d-50de-48a3-aac1-036df40897d4","Type":"ContainerDied","Data":"fcc331738f502bf6c2f7e8d9f9c13f69c47d421c8a114260ab7a59a82041a852"}
Mar 18 13:45:37.353075 master-0 kubenswrapper[27835]: I0318 13:45:37.352389 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qd8n8" event={"ID":"d09230d0-21c5-4e63-b56d-f9346dce706d","Type":"ContainerDied","Data":"b0e4a72c06d475a3f44426a0a05748a7a7153564b4dc6652d75d26aa17284faf"}
Mar 18 13:45:37.353075 master-0 kubenswrapper[27835]: I0318 13:45:37.352523 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e4a72c06d475a3f44426a0a05748a7a7153564b4dc6652d75d26aa17284faf"
Mar 18 13:45:37.353075 master-0 kubenswrapper[27835]: I0318 13:45:37.352530 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qd8n8"
Mar 18 13:45:37.413002 master-0 kubenswrapper[27835]: I0318 13:45:37.412282 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758cc74c7c-r928t"
Mar 18 13:45:37.514332 master-0 kubenswrapper[27835]: I0318 13:45:37.514165 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qjd\" (UniqueName: \"kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.514553 master-0 kubenswrapper[27835]: I0318 13:45:37.514481 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.514607 master-0 kubenswrapper[27835]: I0318 13:45:37.514560 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.514714 master-0 kubenswrapper[27835]: I0318 13:45:37.514682 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.514780 master-0 kubenswrapper[27835]: I0318 13:45:37.514730 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.514856 master-0 kubenswrapper[27835]: I0318 13:45:37.514828 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0\") pod \"673c137d-50de-48a3-aac1-036df40897d4\" (UID: \"673c137d-50de-48a3-aac1-036df40897d4\") "
Mar 18 13:45:37.520247 master-0 kubenswrapper[27835]: I0318 13:45:37.520200 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd" (OuterVolumeSpecName: "kube-api-access-b6qjd") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "kube-api-access-b6qjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:45:37.569027 master-0 kubenswrapper[27835]: I0318 13:45:37.568964 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config" (OuterVolumeSpecName: "config") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:45:37.579186 master-0 kubenswrapper[27835]: I0318 13:45:37.579091 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:45:37.585469 master-0 kubenswrapper[27835]: I0318 13:45:37.585391 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:45:37.590028 master-0 kubenswrapper[27835]: I0318 13:45:37.589963 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:45:37.619226 master-0 kubenswrapper[27835]: I0318 13:45:37.619148 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qjd\" (UniqueName: \"kubernetes.io/projected/673c137d-50de-48a3-aac1-036df40897d4-kube-api-access-b6qjd\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.619226 master-0 kubenswrapper[27835]: I0318 13:45:37.619213 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.619226 master-0 kubenswrapper[27835]: I0318 13:45:37.619228 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.619226 master-0 kubenswrapper[27835]: I0318 13:45:37.619241 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-config\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.619612 master-0 kubenswrapper[27835]: I0318 13:45:37.619254 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:37.645471 master-0 kubenswrapper[27835]: I0318 13:45:37.645347 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "673c137d-50de-48a3-aac1-036df40897d4" (UID: "673c137d-50de-48a3-aac1-036df40897d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 13:45:37.721502 master-0 kubenswrapper[27835]: I0318 13:45:37.721374 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/673c137d-50de-48a3-aac1-036df40897d4-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:38.355635 master-0 kubenswrapper[27835]: I0318 13:45:38.355499 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:38.356219 master-0 kubenswrapper[27835]: I0318 13:45:38.355764 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="29279da3-0357-41fb-8f23-25040f3130cb" containerName="nova-scheduler-scheduler" containerID="cri-o://1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9" gracePeriod=30
Mar 18 13:45:38.384966 master-0 kubenswrapper[27835]: I0318 13:45:38.383717 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:45:38.384966 master-0 kubenswrapper[27835]: I0318 13:45:38.383993 27835 kuberuntime_container.go:808] "Killing container with
a grace period" pod="openstack/nova-api-0" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-log" containerID="cri-o://42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf" gracePeriod=30 Mar 18 13:45:38.384966 master-0 kubenswrapper[27835]: I0318 13:45:38.384545 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-api" containerID="cri-o://88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc" gracePeriod=30 Mar 18 13:45:38.384966 master-0 kubenswrapper[27835]: I0318 13:45:38.384868 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" Mar 18 13:45:38.385496 master-0 kubenswrapper[27835]: I0318 13:45:38.385095 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758cc74c7c-r928t" event={"ID":"673c137d-50de-48a3-aac1-036df40897d4","Type":"ContainerDied","Data":"1dfe63b7755c206fe3b1b17d14ef6489735473f86131751c2c5f6c4f55401c8e"} Mar 18 13:45:38.385496 master-0 kubenswrapper[27835]: I0318 13:45:38.385163 27835 scope.go:117] "RemoveContainer" containerID="fcc331738f502bf6c2f7e8d9f9c13f69c47d421c8a114260ab7a59a82041a852" Mar 18 13:45:38.398608 master-0 kubenswrapper[27835]: I0318 13:45:38.398371 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"0a015210-4858-461a-955f-761275fc2b6a","Type":"ContainerStarted","Data":"1d5655faa00a5d6f0d9ecb711ee6f6825c7c88d2e4a51bab7735b79e645e449b"} Mar 18 13:45:38.398748 master-0 kubenswrapper[27835]: I0318 13:45:38.398657 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:38.435895 master-0 kubenswrapper[27835]: I0318 13:45:38.435335 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"] Mar 18 13:45:38.454674 master-0 kubenswrapper[27835]: I0318 13:45:38.454635 27835 scope.go:117] "RemoveContainer" containerID="38d9ba65e9a3d0e9a2e16c37dc3a3bba8beb86991b1e456a034cc26bcb3d3546" Mar 18 13:45:38.456982 master-0 kubenswrapper[27835]: I0318 13:45:38.456935 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758cc74c7c-r928t"] Mar 18 13:45:38.474075 master-0 kubenswrapper[27835]: I0318 13:45:38.473990 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=4.003372605 podStartE2EDuration="19.47396973s" podCreationTimestamp="2026-03-18 13:45:19 +0000 UTC" firstStartedPulling="2026-03-18 13:45:21.676187517 +0000 UTC m=+1285.641399087" lastFinishedPulling="2026-03-18 13:45:37.146784632 +0000 UTC m=+1301.111996212" observedRunningTime="2026-03-18 13:45:38.4290899 +0000 UTC m=+1302.394301460" watchObservedRunningTime="2026-03-18 13:45:38.47396973 +0000 UTC m=+1302.439181300" Mar 18 13:45:38.477510 master-0 kubenswrapper[27835]: I0318 13:45:38.477457 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 13:45:38.832786 master-0 kubenswrapper[27835]: I0318 13:45:38.832725 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:45:38.833001 master-0 kubenswrapper[27835]: I0318 13:45:38.832800 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:45:38.842777 master-0 kubenswrapper[27835]: I0318 13:45:38.842722 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-89c8v" Mar 18 13:45:38.896741 master-0 kubenswrapper[27835]: I0318 13:45:38.894144 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 13:45:38.896741 master-0 kubenswrapper[27835]: I0318 13:45:38.894479 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 13:45:38.963045 master-0 kubenswrapper[27835]: I0318 13:45:38.962986 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv6r8\" (UniqueName: \"kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8\") pod \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " Mar 18 13:45:38.963269 master-0 kubenswrapper[27835]: I0318 13:45:38.963168 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts\") pod \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " Mar 18 13:45:38.963269 master-0 kubenswrapper[27835]: I0318 13:45:38.963255 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data\") pod \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " Mar 18 13:45:38.963430 master-0 kubenswrapper[27835]: I0318 13:45:38.963275 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle\") pod \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\" (UID: \"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6\") " Mar 18 13:45:38.967505 master-0 kubenswrapper[27835]: I0318 13:45:38.967388 27835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts" (OuterVolumeSpecName: "scripts") pod "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" (UID: "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:38.974292 master-0 kubenswrapper[27835]: I0318 13:45:38.974119 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8" (OuterVolumeSpecName: "kube-api-access-lv6r8") pod "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" (UID: "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6"). InnerVolumeSpecName "kube-api-access-lv6r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:45:38.995836 master-0 kubenswrapper[27835]: I0318 13:45:38.995734 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data" (OuterVolumeSpecName: "config-data") pod "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" (UID: "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:39.009393 master-0 kubenswrapper[27835]: I0318 13:45:39.009334 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" (UID: "aaa7ee44-2954-4d95-8cf3-a1fd004b87e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:39.067428 master-0 kubenswrapper[27835]: I0318 13:45:39.066540 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:39.067428 master-0 kubenswrapper[27835]: I0318 13:45:39.066588 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:39.067428 master-0 kubenswrapper[27835]: I0318 13:45:39.066601 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:39.067428 master-0 kubenswrapper[27835]: I0318 13:45:39.066616 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv6r8\" (UniqueName: \"kubernetes.io/projected/aaa7ee44-2954-4d95-8cf3-a1fd004b87e6-kube-api-access-lv6r8\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:39.411292 master-0 kubenswrapper[27835]: I0318 13:45:39.411246 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-89c8v" Mar 18 13:45:39.414524 master-0 kubenswrapper[27835]: I0318 13:45:39.414481 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-89c8v" event={"ID":"aaa7ee44-2954-4d95-8cf3-a1fd004b87e6","Type":"ContainerDied","Data":"c619a97cbf9c36a88134b98363e7f371d1d891c3b7f9e4594c8c8880c8c93238"} Mar 18 13:45:39.414658 master-0 kubenswrapper[27835]: I0318 13:45:39.414530 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c619a97cbf9c36a88134b98363e7f371d1d891c3b7f9e4594c8c8880c8c93238" Mar 18 13:45:39.435924 master-0 kubenswrapper[27835]: I0318 13:45:39.434783 27835 generic.go:334] "Generic (PLEG): container finished" podID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerID="42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf" exitCode=143 Mar 18 13:45:39.436143 master-0 kubenswrapper[27835]: I0318 13:45:39.436005 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerDied","Data":"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf"} Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.470548 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: E0318 13:45:39.471036 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="dnsmasq-dns" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471050 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="dnsmasq-dns" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: E0318 13:45:39.471075 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673c137d-50de-48a3-aac1-036df40897d4" 
containerName="init" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471086 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="init" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: E0318 13:45:39.471093 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09230d0-21c5-4e63-b56d-f9346dce706d" containerName="nova-manage" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471099 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09230d0-21c5-4e63-b56d-f9346dce706d" containerName="nova-manage" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: E0318 13:45:39.471112 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" containerName="nova-cell1-conductor-db-sync" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471118 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" containerName="nova-cell1-conductor-db-sync" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471399 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa7ee44-2954-4d95-8cf3-a1fd004b87e6" containerName="nova-cell1-conductor-db-sync" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471432 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="673c137d-50de-48a3-aac1-036df40897d4" containerName="dnsmasq-dns" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.471472 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09230d0-21c5-4e63-b56d-f9346dce706d" containerName="nova-manage" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.472155 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.477795 master-0 kubenswrapper[27835]: I0318 13:45:39.475531 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 18 13:45:39.496440 master-0 kubenswrapper[27835]: I0318 13:45:39.495194 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 13:45:39.581062 master-0 kubenswrapper[27835]: I0318 13:45:39.580990 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.581257 master-0 kubenswrapper[27835]: I0318 13:45:39.581179 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.581257 master-0 kubenswrapper[27835]: I0318 13:45:39.581238 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qwq\" (UniqueName: \"kubernetes.io/projected/3c66e4d3-5b19-4466-b3c3-61bd46730848-kube-api-access-z4qwq\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.683079 master-0 kubenswrapper[27835]: I0318 13:45:39.682924 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.683079 master-0 kubenswrapper[27835]: I0318 13:45:39.682997 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.683079 master-0 kubenswrapper[27835]: I0318 13:45:39.683040 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4qwq\" (UniqueName: \"kubernetes.io/projected/3c66e4d3-5b19-4466-b3c3-61bd46730848-kube-api-access-z4qwq\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.687254 master-0 kubenswrapper[27835]: I0318 13:45:39.687195 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.691486 master-0 kubenswrapper[27835]: I0318 13:45:39.691386 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c66e4d3-5b19-4466-b3c3-61bd46730848-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.700993 master-0 kubenswrapper[27835]: I0318 13:45:39.700933 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4qwq\" (UniqueName: \"kubernetes.io/projected/3c66e4d3-5b19-4466-b3c3-61bd46730848-kube-api-access-z4qwq\") pod \"nova-cell1-conductor-0\" (UID: \"3c66e4d3-5b19-4466-b3c3-61bd46730848\") " 
pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:39.845019 master-0 kubenswrapper[27835]: I0318 13:45:39.844946 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:40.298567 master-0 kubenswrapper[27835]: I0318 13:45:40.298406 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673c137d-50de-48a3-aac1-036df40897d4" path="/var/lib/kubelet/pods/673c137d-50de-48a3-aac1-036df40897d4/volumes" Mar 18 13:45:40.345745 master-0 kubenswrapper[27835]: W0318 13:45:40.345680 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c66e4d3_5b19_4466_b3c3_61bd46730848.slice/crio-2bb84796cec00de30b1b469223feacf587f8fb7b6bcd1ed1bbd44eda0b7eaca5 WatchSource:0}: Error finding container 2bb84796cec00de30b1b469223feacf587f8fb7b6bcd1ed1bbd44eda0b7eaca5: Status 404 returned error can't find the container with id 2bb84796cec00de30b1b469223feacf587f8fb7b6bcd1ed1bbd44eda0b7eaca5 Mar 18 13:45:40.346200 master-0 kubenswrapper[27835]: I0318 13:45:40.346147 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 13:45:40.449336 master-0 kubenswrapper[27835]: I0318 13:45:40.449260 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c66e4d3-5b19-4466-b3c3-61bd46730848","Type":"ContainerStarted","Data":"2bb84796cec00de30b1b469223feacf587f8fb7b6bcd1ed1bbd44eda0b7eaca5"} Mar 18 13:45:40.579906 master-0 kubenswrapper[27835]: E0318 13:45:40.579829 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 13:45:40.582049 master-0 
kubenswrapper[27835]: E0318 13:45:40.582002 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 13:45:40.583981 master-0 kubenswrapper[27835]: E0318 13:45:40.583923 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 13:45:40.584133 master-0 kubenswrapper[27835]: E0318 13:45:40.584101 27835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="29279da3-0357-41fb-8f23-25040f3130cb" containerName="nova-scheduler-scheduler" Mar 18 13:45:41.467511 master-0 kubenswrapper[27835]: I0318 13:45:41.466540 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3c66e4d3-5b19-4466-b3c3-61bd46730848","Type":"ContainerStarted","Data":"be57427a0ef8a86fb934c6cb532f610677bec88b78eb407df163dd7213fe92d5"} Mar 18 13:45:41.469443 master-0 kubenswrapper[27835]: I0318 13:45:41.468399 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 18 13:45:41.638567 master-0 kubenswrapper[27835]: I0318 13:45:41.638261 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.638237464 podStartE2EDuration="2.638237464s" podCreationTimestamp="2026-03-18 13:45:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:41.636136369 +0000 UTC m=+1305.601347969" watchObservedRunningTime="2026-03-18 13:45:41.638237464 +0000 UTC m=+1305.603449024" Mar 18 13:45:42.160285 master-0 kubenswrapper[27835]: I0318 13:45:42.159587 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 13:45:42.255358 master-0 kubenswrapper[27835]: I0318 13:45:42.255282 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8db6m\" (UniqueName: \"kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m\") pod \"5cc1e19c-125b-478a-98f3-aceffdc55b55\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " Mar 18 13:45:42.255890 master-0 kubenswrapper[27835]: I0318 13:45:42.255551 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs\") pod \"5cc1e19c-125b-478a-98f3-aceffdc55b55\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " Mar 18 13:45:42.255890 master-0 kubenswrapper[27835]: I0318 13:45:42.255717 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data\") pod \"5cc1e19c-125b-478a-98f3-aceffdc55b55\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " Mar 18 13:45:42.255890 master-0 kubenswrapper[27835]: I0318 13:45:42.255755 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle\") pod \"5cc1e19c-125b-478a-98f3-aceffdc55b55\" (UID: \"5cc1e19c-125b-478a-98f3-aceffdc55b55\") " Mar 18 13:45:42.256552 master-0 kubenswrapper[27835]: I0318 13:45:42.256515 27835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs" (OuterVolumeSpecName: "logs") pod "5cc1e19c-125b-478a-98f3-aceffdc55b55" (UID: "5cc1e19c-125b-478a-98f3-aceffdc55b55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 13:45:42.257032 master-0 kubenswrapper[27835]: I0318 13:45:42.257010 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cc1e19c-125b-478a-98f3-aceffdc55b55-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:42.258851 master-0 kubenswrapper[27835]: I0318 13:45:42.258797 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m" (OuterVolumeSpecName: "kube-api-access-8db6m") pod "5cc1e19c-125b-478a-98f3-aceffdc55b55" (UID: "5cc1e19c-125b-478a-98f3-aceffdc55b55"). InnerVolumeSpecName "kube-api-access-8db6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:45:42.287119 master-0 kubenswrapper[27835]: I0318 13:45:42.287058 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data" (OuterVolumeSpecName: "config-data") pod "5cc1e19c-125b-478a-98f3-aceffdc55b55" (UID: "5cc1e19c-125b-478a-98f3-aceffdc55b55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:42.291178 master-0 kubenswrapper[27835]: I0318 13:45:42.291109 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cc1e19c-125b-478a-98f3-aceffdc55b55" (UID: "5cc1e19c-125b-478a-98f3-aceffdc55b55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:45:42.369277 master-0 kubenswrapper[27835]: I0318 13:45:42.368968 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:42.369277 master-0 kubenswrapper[27835]: I0318 13:45:42.369056 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc1e19c-125b-478a-98f3-aceffdc55b55-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:42.369277 master-0 kubenswrapper[27835]: I0318 13:45:42.369068 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8db6m\" (UniqueName: \"kubernetes.io/projected/5cc1e19c-125b-478a-98f3-aceffdc55b55-kube-api-access-8db6m\") on node \"master-0\" DevicePath \"\"" Mar 18 13:45:42.482232 master-0 kubenswrapper[27835]: I0318 13:45:42.482168 27835 generic.go:334] "Generic (PLEG): container finished" podID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerID="88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc" exitCode=0 Mar 18 13:45:42.482882 master-0 kubenswrapper[27835]: I0318 13:45:42.482290 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 13:45:42.491588 master-0 kubenswrapper[27835]: I0318 13:45:42.482377 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerDied","Data":"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc"} Mar 18 13:45:42.491953 master-0 kubenswrapper[27835]: I0318 13:45:42.491617 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5cc1e19c-125b-478a-98f3-aceffdc55b55","Type":"ContainerDied","Data":"b03ee83f1b51e73d1cf77da5413dfa5ab9f67d479e5f6a09c4d755ebf92bf606"} Mar 18 13:45:42.491953 master-0 kubenswrapper[27835]: I0318 13:45:42.491671 27835 scope.go:117] "RemoveContainer" containerID="88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc" Mar 18 13:45:42.538790 master-0 kubenswrapper[27835]: I0318 13:45:42.538089 27835 scope.go:117] "RemoveContainer" containerID="42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf" Mar 18 13:45:42.560031 master-0 kubenswrapper[27835]: I0318 13:45:42.559901 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: I0318 13:45:42.602294 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: I0318 13:45:42.603251 27835 scope.go:117] "RemoveContainer" containerID="88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc" Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: E0318 13:45:42.603811 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc\": container with ID starting with 88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc not found: ID does not exist" 
containerID="88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc"
Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: I0318 13:45:42.603868 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc"} err="failed to get container status \"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc\": rpc error: code = NotFound desc = could not find container \"88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc\": container with ID starting with 88d4b9396afcacfc5c1cf95b07fdc1d29ecebd143ebe7b2635ec6bb11c338dcc not found: ID does not exist"
Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: I0318 13:45:42.603900 27835 scope.go:117] "RemoveContainer" containerID="42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf"
Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: E0318 13:45:42.605690 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf\": container with ID starting with 42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf not found: ID does not exist" containerID="42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf"
Mar 18 13:45:42.606720 master-0 kubenswrapper[27835]: I0318 13:45:42.605719 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf"} err="failed to get container status \"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf\": rpc error: code = NotFound desc = could not find container \"42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf\": container with ID starting with 42e252d758b96532dbba770c102175a75ec18f9607df6d6dddf72fb4b9dccdcf not found: ID does not exist"
Mar 18 13:45:42.620899 master-0 kubenswrapper[27835]: I0318 13:45:42.620795 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:45:42.621489 master-0 kubenswrapper[27835]: E0318 13:45:42.621451 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-api"
Mar 18 13:45:42.621489 master-0 kubenswrapper[27835]: I0318 13:45:42.621477 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-api"
Mar 18 13:45:42.621601 master-0 kubenswrapper[27835]: E0318 13:45:42.621508 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-log"
Mar 18 13:45:42.621601 master-0 kubenswrapper[27835]: I0318 13:45:42.621517 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-log"
Mar 18 13:45:42.622039 master-0 kubenswrapper[27835]: I0318 13:45:42.621758 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-api"
Mar 18 13:45:42.622039 master-0 kubenswrapper[27835]: I0318 13:45:42.621827 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" containerName="nova-api-log"
Mar 18 13:45:42.623224 master-0 kubenswrapper[27835]: I0318 13:45:42.623179 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:45:42.626659 master-0 kubenswrapper[27835]: I0318 13:45:42.625944 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 13:45:42.636077 master-0 kubenswrapper[27835]: I0318 13:45:42.635995 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:45:42.689830 master-0 kubenswrapper[27835]: I0318 13:45:42.689669 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl9sm\" (UniqueName: \"kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.689830 master-0 kubenswrapper[27835]: I0318 13:45:42.689786 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.690387 master-0 kubenswrapper[27835]: I0318 13:45:42.689900 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.690387 master-0 kubenswrapper[27835]: I0318 13:45:42.689983 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.793288 master-0 kubenswrapper[27835]: I0318 13:45:42.793209 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.793612 master-0 kubenswrapper[27835]: I0318 13:45:42.793423 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl9sm\" (UniqueName: \"kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.793612 master-0 kubenswrapper[27835]: I0318 13:45:42.793455 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.793612 master-0 kubenswrapper[27835]: I0318 13:45:42.793530 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.794812 master-0 kubenswrapper[27835]: I0318 13:45:42.794761 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.798135 master-0 kubenswrapper[27835]: I0318 13:45:42.798032 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.798379 master-0 kubenswrapper[27835]: I0318 13:45:42.798300 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.811962 master-0 kubenswrapper[27835]: I0318 13:45:42.811867 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl9sm\" (UniqueName: \"kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm\") pod \"nova-api-0\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") " pod="openstack/nova-api-0"
Mar 18 13:45:42.947163 master-0 kubenswrapper[27835]: I0318 13:45:42.947117 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:45:43.425061 master-0 kubenswrapper[27835]: I0318 13:45:43.425025 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:45:43.430549 master-0 kubenswrapper[27835]: W0318 13:45:43.430392 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9716d8b_d31c_4c84_af6b_fc8881a12372.slice/crio-d322f9bc8cdb7b68ec6c9076e5a8fde9efcb22bee8bba0616b6a9fd9d1127531 WatchSource:0}: Error finding container d322f9bc8cdb7b68ec6c9076e5a8fde9efcb22bee8bba0616b6a9fd9d1127531: Status 404 returned error can't find the container with id d322f9bc8cdb7b68ec6c9076e5a8fde9efcb22bee8bba0616b6a9fd9d1127531
Mar 18 13:45:43.504868 master-0 kubenswrapper[27835]: I0318 13:45:43.504797 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerStarted","Data":"d322f9bc8cdb7b68ec6c9076e5a8fde9efcb22bee8bba0616b6a9fd9d1127531"}
Mar 18 13:45:43.515808 master-0 kubenswrapper[27835]: I0318 13:45:43.507941 27835 generic.go:334] "Generic (PLEG): container finished" podID="29279da3-0357-41fb-8f23-25040f3130cb" containerID="1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9" exitCode=0
Mar 18 13:45:43.515808 master-0 kubenswrapper[27835]: I0318 13:45:43.507984 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29279da3-0357-41fb-8f23-25040f3130cb","Type":"ContainerDied","Data":"1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9"}
Mar 18 13:45:43.677990 master-0 kubenswrapper[27835]: I0318 13:45:43.677936 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 13:45:43.862838 master-0 kubenswrapper[27835]: I0318 13:45:43.862767 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhxx8\" (UniqueName: \"kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8\") pod \"29279da3-0357-41fb-8f23-25040f3130cb\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") "
Mar 18 13:45:43.862838 master-0 kubenswrapper[27835]: I0318 13:45:43.862842 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle\") pod \"29279da3-0357-41fb-8f23-25040f3130cb\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") "
Mar 18 13:45:43.863108 master-0 kubenswrapper[27835]: I0318 13:45:43.863073 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data\") pod \"29279da3-0357-41fb-8f23-25040f3130cb\" (UID: \"29279da3-0357-41fb-8f23-25040f3130cb\") "
Mar 18 13:45:43.866818 master-0 kubenswrapper[27835]: I0318 13:45:43.866698 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8" (OuterVolumeSpecName: "kube-api-access-bhxx8") pod "29279da3-0357-41fb-8f23-25040f3130cb" (UID: "29279da3-0357-41fb-8f23-25040f3130cb"). InnerVolumeSpecName "kube-api-access-bhxx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:45:43.897214 master-0 kubenswrapper[27835]: I0318 13:45:43.897144 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data" (OuterVolumeSpecName: "config-data") pod "29279da3-0357-41fb-8f23-25040f3130cb" (UID: "29279da3-0357-41fb-8f23-25040f3130cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:43.897461 master-0 kubenswrapper[27835]: I0318 13:45:43.897365 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29279da3-0357-41fb-8f23-25040f3130cb" (UID: "29279da3-0357-41fb-8f23-25040f3130cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:43.967440 master-0 kubenswrapper[27835]: I0318 13:45:43.965994 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:43.967440 master-0 kubenswrapper[27835]: I0318 13:45:43.966047 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhxx8\" (UniqueName: \"kubernetes.io/projected/29279da3-0357-41fb-8f23-25040f3130cb-kube-api-access-bhxx8\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:43.967440 master-0 kubenswrapper[27835]: I0318 13:45:43.966071 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29279da3-0357-41fb-8f23-25040f3130cb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:44.301891 master-0 kubenswrapper[27835]: I0318 13:45:44.301447 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cc1e19c-125b-478a-98f3-aceffdc55b55" path="/var/lib/kubelet/pods/5cc1e19c-125b-478a-98f3-aceffdc55b55/volumes"
Mar 18 13:45:44.564835 master-0 kubenswrapper[27835]: I0318 13:45:44.564702 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerStarted","Data":"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"}
Mar 18 13:45:44.565434 master-0 kubenswrapper[27835]: I0318 13:45:44.565391 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerStarted","Data":"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"}
Mar 18 13:45:44.567154 master-0 kubenswrapper[27835]: I0318 13:45:44.567115 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29279da3-0357-41fb-8f23-25040f3130cb","Type":"ContainerDied","Data":"bba82ce9557799868d95e09db7bd3433576be508491db6bf8578fba1f622f6f8"}
Mar 18 13:45:44.567240 master-0 kubenswrapper[27835]: I0318 13:45:44.567177 27835 scope.go:117] "RemoveContainer" containerID="1ada06c52f8681119e4d0afd5452496fe5b40b505d0b2d21efd18b3be0212ad9"
Mar 18 13:45:44.567378 master-0 kubenswrapper[27835]: I0318 13:45:44.567355 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.605441 master-0 kubenswrapper[27835]: I0318 13:45:44.605122 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.605101475 podStartE2EDuration="2.605101475s" podCreationTimestamp="2026-03-18 13:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:44.598671895 +0000 UTC m=+1308.563883475" watchObservedRunningTime="2026-03-18 13:45:44.605101475 +0000 UTC m=+1308.570313045"
Mar 18 13:45:44.642242 master-0 kubenswrapper[27835]: I0318 13:45:44.642168 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:44.654670 master-0 kubenswrapper[27835]: I0318 13:45:44.654549 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:44.667899 master-0 kubenswrapper[27835]: I0318 13:45:44.667825 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:44.668658 master-0 kubenswrapper[27835]: E0318 13:45:44.668622 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29279da3-0357-41fb-8f23-25040f3130cb" containerName="nova-scheduler-scheduler"
Mar 18 13:45:44.668658 master-0 kubenswrapper[27835]: I0318 13:45:44.668657 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="29279da3-0357-41fb-8f23-25040f3130cb" containerName="nova-scheduler-scheduler"
Mar 18 13:45:44.669189 master-0 kubenswrapper[27835]: I0318 13:45:44.669156 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="29279da3-0357-41fb-8f23-25040f3130cb" containerName="nova-scheduler-scheduler"
Mar 18 13:45:44.670765 master-0 kubenswrapper[27835]: I0318 13:45:44.670705 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.674570 master-0 kubenswrapper[27835]: I0318 13:45:44.674520 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 18 13:45:44.684891 master-0 kubenswrapper[27835]: I0318 13:45:44.683883 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:44.795681 master-0 kubenswrapper[27835]: I0318 13:45:44.795621 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.795919 master-0 kubenswrapper[27835]: I0318 13:45:44.795729 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.795988 master-0 kubenswrapper[27835]: I0318 13:45:44.795958 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlbz\" (UniqueName: \"kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.898982 master-0 kubenswrapper[27835]: I0318 13:45:44.898841 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.898982 master-0 kubenswrapper[27835]: I0318 13:45:44.898913 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.898982 master-0 kubenswrapper[27835]: I0318 13:45:44.898979 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzlbz\" (UniqueName: \"kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.902694 master-0 kubenswrapper[27835]: I0318 13:45:44.902658 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.903548 master-0 kubenswrapper[27835]: I0318 13:45:44.903169 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:44.917269 master-0 kubenswrapper[27835]: I0318 13:45:44.917211 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzlbz\" (UniqueName: \"kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz\") pod \"nova-scheduler-0\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " pod="openstack/nova-scheduler-0"
Mar 18 13:45:45.002130 master-0 kubenswrapper[27835]: I0318 13:45:45.002069 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 13:45:45.530919 master-0 kubenswrapper[27835]: I0318 13:45:45.530873 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:45:45.537136 master-0 kubenswrapper[27835]: W0318 13:45:45.537090 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod374b1e4c_f35c_4d46_9ea0_e65d45b825d8.slice/crio-84d6d8e8e4576ec25ac8bb82270677e1e515fff8b6abc9022baf622b858bad09 WatchSource:0}: Error finding container 84d6d8e8e4576ec25ac8bb82270677e1e515fff8b6abc9022baf622b858bad09: Status 404 returned error can't find the container with id 84d6d8e8e4576ec25ac8bb82270677e1e515fff8b6abc9022baf622b858bad09
Mar 18 13:45:45.581634 master-0 kubenswrapper[27835]: I0318 13:45:45.581573 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"374b1e4c-f35c-4d46-9ea0-e65d45b825d8","Type":"ContainerStarted","Data":"84d6d8e8e4576ec25ac8bb82270677e1e515fff8b6abc9022baf622b858bad09"}
Mar 18 13:45:46.313270 master-0 kubenswrapper[27835]: I0318 13:45:46.313191 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29279da3-0357-41fb-8f23-25040f3130cb" path="/var/lib/kubelet/pods/29279da3-0357-41fb-8f23-25040f3130cb/volumes"
Mar 18 13:45:46.596816 master-0 kubenswrapper[27835]: I0318 13:45:46.596516 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"374b1e4c-f35c-4d46-9ea0-e65d45b825d8","Type":"ContainerStarted","Data":"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938"}
Mar 18 13:45:46.667769 master-0 kubenswrapper[27835]: I0318 13:45:46.667675 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.667654441 podStartE2EDuration="2.667654441s" podCreationTimestamp="2026-03-18 13:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:46.664351333 +0000 UTC m=+1310.629562903" watchObservedRunningTime="2026-03-18 13:45:46.667654441 +0000 UTC m=+1310.632866011"
Mar 18 13:45:49.890760 master-0 kubenswrapper[27835]: I0318 13:45:49.890665 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 18 13:45:50.003796 master-0 kubenswrapper[27835]: I0318 13:45:50.003728 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 18 13:45:52.948158 master-0 kubenswrapper[27835]: I0318 13:45:52.948072 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 13:45:52.948158 master-0 kubenswrapper[27835]: I0318 13:45:52.948137 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 13:45:54.029677 master-0 kubenswrapper[27835]: I0318 13:45:54.029620 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:45:54.030484 master-0 kubenswrapper[27835]: I0318 13:45:54.029622 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:45:55.005480 master-0 kubenswrapper[27835]: I0318 13:45:55.003721 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 18 13:45:55.035968 master-0 kubenswrapper[27835]: I0318 13:45:55.035903 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 18 13:45:55.760044 master-0 kubenswrapper[27835]: I0318 13:45:55.759984 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 18 13:45:56.743029 master-0 kubenswrapper[27835]: I0318 13:45:56.742963 27835 generic.go:334] "Generic (PLEG): container finished" podID="d2a793d4-62c6-4482-a5e5-21ed4cc72e33" containerID="13960cb98952b9f1cac17e9e61119ce462da2b6a0fa529dcc66cf748054afe7c" exitCode=0
Mar 18 13:45:56.743648 master-0 kubenswrapper[27835]: I0318 13:45:56.743048 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerDied","Data":"13960cb98952b9f1cac17e9e61119ce462da2b6a0fa529dcc66cf748054afe7c"}
Mar 18 13:45:57.630063 master-0 kubenswrapper[27835]: I0318 13:45:57.629966 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:45:57.725091 master-0 kubenswrapper[27835]: I0318 13:45:57.725057 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 13:45:57.759011 master-0 kubenswrapper[27835]: I0318 13:45:57.756841 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data\") pod \"e75fe206-50da-483e-8b9a-a86adf0082ac\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") "
Mar 18 13:45:57.759011 master-0 kubenswrapper[27835]: I0318 13:45:57.757200 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle\") pod \"e75fe206-50da-483e-8b9a-a86adf0082ac\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") "
Mar 18 13:45:57.759011 master-0 kubenswrapper[27835]: I0318 13:45:57.757385 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dwrz\" (UniqueName: \"kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz\") pod \"e75fe206-50da-483e-8b9a-a86adf0082ac\" (UID: \"e75fe206-50da-483e-8b9a-a86adf0082ac\") "
Mar 18 13:45:57.764170 master-0 kubenswrapper[27835]: I0318 13:45:57.764105 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz" (OuterVolumeSpecName: "kube-api-access-7dwrz") pod "e75fe206-50da-483e-8b9a-a86adf0082ac" (UID: "e75fe206-50da-483e-8b9a-a86adf0082ac"). InnerVolumeSpecName "kube-api-access-7dwrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:45:57.778715 master-0 kubenswrapper[27835]: I0318 13:45:57.778276 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"a2dd8262db7b894577b813b9b3e1ad1061a6a741dc835b2e62a022851cea887a"}
Mar 18 13:45:57.778715 master-0 kubenswrapper[27835]: I0318 13:45:57.778328 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"9a116e984ff58a08f473a2beda3bf950db8971edae14b2748afc632d97a403da"}
Mar 18 13:45:57.780691 master-0 kubenswrapper[27835]: I0318 13:45:57.780643 27835 generic.go:334] "Generic (PLEG): container finished" podID="e75fe206-50da-483e-8b9a-a86adf0082ac" containerID="a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9" exitCode=137
Mar 18 13:45:57.780779 master-0 kubenswrapper[27835]: I0318 13:45:57.780715 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:45:57.780779 master-0 kubenswrapper[27835]: I0318 13:45:57.780738 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e75fe206-50da-483e-8b9a-a86adf0082ac","Type":"ContainerDied","Data":"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"}
Mar 18 13:45:57.780779 master-0 kubenswrapper[27835]: I0318 13:45:57.780768 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e75fe206-50da-483e-8b9a-a86adf0082ac","Type":"ContainerDied","Data":"084196e01e9f971660ddd654dc5a05541b7a809dacd24125eca0d094f9edbbd8"}
Mar 18 13:45:57.780933 master-0 kubenswrapper[27835]: I0318 13:45:57.780789 27835 scope.go:117] "RemoveContainer" containerID="a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"
Mar 18 13:45:57.783513 master-0 kubenswrapper[27835]: I0318 13:45:57.783197 27835 generic.go:334] "Generic (PLEG): container finished" podID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerID="a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9" exitCode=137
Mar 18 13:45:57.783513 master-0 kubenswrapper[27835]: I0318 13:45:57.783245 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerDied","Data":"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"}
Mar 18 13:45:57.783513 master-0 kubenswrapper[27835]: I0318 13:45:57.783273 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"13db2dec-a8ff-45b5-b3b0-3459de6bbf41","Type":"ContainerDied","Data":"1d66f82ba5ae412ba63f68f61d61d3b42793e3a88b3c5d56bc4623c1bed7487f"}
Mar 18 13:45:57.783513 master-0 kubenswrapper[27835]: I0318 13:45:57.783328 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 13:45:57.805200 master-0 kubenswrapper[27835]: I0318 13:45:57.805152 27835 scope.go:117] "RemoveContainer" containerID="a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"
Mar 18 13:45:57.805663 master-0 kubenswrapper[27835]: E0318 13:45:57.805621 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9\": container with ID starting with a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9 not found: ID does not exist" containerID="a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"
Mar 18 13:45:57.805746 master-0 kubenswrapper[27835]: I0318 13:45:57.805656 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9"} err="failed to get container status \"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9\": rpc error: code = NotFound desc = could not find container \"a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9\": container with ID starting with a00e95672bdcc02e6756e4747a859efb5c6d8d91a430733c5464707c1db0f9b9 not found: ID does not exist"
Mar 18 13:45:57.805746 master-0 kubenswrapper[27835]: I0318 13:45:57.805681 27835 scope.go:117] "RemoveContainer" containerID="a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"
Mar 18 13:45:57.810653 master-0 kubenswrapper[27835]: I0318 13:45:57.810600 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data" (OuterVolumeSpecName: "config-data") pod "e75fe206-50da-483e-8b9a-a86adf0082ac" (UID: "e75fe206-50da-483e-8b9a-a86adf0082ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:57.817353 master-0 kubenswrapper[27835]: I0318 13:45:57.817325 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e75fe206-50da-483e-8b9a-a86adf0082ac" (UID: "e75fe206-50da-483e-8b9a-a86adf0082ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:57.835304 master-0 kubenswrapper[27835]: I0318 13:45:57.835141 27835 scope.go:117] "RemoveContainer" containerID="f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e"
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.859192 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle\") pod \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") "
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.859349 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data\") pod \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") "
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.859518 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs\") pod \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") "
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.859637 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92qhb\" (UniqueName: \"kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb\") pod \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\" (UID: \"13db2dec-a8ff-45b5-b3b0-3459de6bbf41\") "
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.860858 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs" (OuterVolumeSpecName: "logs") pod "13db2dec-a8ff-45b5-b3b0-3459de6bbf41" (UID: "13db2dec-a8ff-45b5-b3b0-3459de6bbf41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.861113 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dwrz\" (UniqueName: \"kubernetes.io/projected/e75fe206-50da-483e-8b9a-a86adf0082ac-kube-api-access-7dwrz\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.861129 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.861139 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.912948 master-0 kubenswrapper[27835]: I0318 13:45:57.861148 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e75fe206-50da-483e-8b9a-a86adf0082ac-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.923221 master-0 kubenswrapper[27835]: I0318 13:45:57.923154 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb" (OuterVolumeSpecName: "kube-api-access-92qhb") pod "13db2dec-a8ff-45b5-b3b0-3459de6bbf41" (UID: "13db2dec-a8ff-45b5-b3b0-3459de6bbf41"). InnerVolumeSpecName "kube-api-access-92qhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:45:57.925133 master-0 kubenswrapper[27835]: I0318 13:45:57.925079 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data" (OuterVolumeSpecName: "config-data") pod "13db2dec-a8ff-45b5-b3b0-3459de6bbf41" (UID: "13db2dec-a8ff-45b5-b3b0-3459de6bbf41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:57.959749 master-0 kubenswrapper[27835]: I0318 13:45:57.959679 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13db2dec-a8ff-45b5-b3b0-3459de6bbf41" (UID: "13db2dec-a8ff-45b5-b3b0-3459de6bbf41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:45:57.963599 master-0 kubenswrapper[27835]: I0318 13:45:57.963543 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.963599 master-0 kubenswrapper[27835]: I0318 13:45:57.963591 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:57.963599 master-0 kubenswrapper[27835]: I0318 13:45:57.963602 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92qhb\" (UniqueName: \"kubernetes.io/projected/13db2dec-a8ff-45b5-b3b0-3459de6bbf41-kube-api-access-92qhb\") on node \"master-0\" DevicePath \"\""
Mar 18 13:45:58.063902 master-0 kubenswrapper[27835]: I0318 13:45:58.063837 27835 scope.go:117] "RemoveContainer" containerID="a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"
Mar 18 13:45:58.064776 master-0 kubenswrapper[27835]: E0318 13:45:58.064731 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9\": container with ID starting with a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9 not found: ID does not exist" containerID="a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"
Mar 18 13:45:58.064830 master-0 kubenswrapper[27835]: I0318 13:45:58.064766 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9"} err="failed to get container status \"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9\": rpc error: code = NotFound desc = could not find
container \"a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9\": container with ID starting with a2ab5fd243f343851e895f35f4654f27bb7a415afced56b2b02936c57c1d3cf9 not found: ID does not exist" Mar 18 13:45:58.064830 master-0 kubenswrapper[27835]: I0318 13:45:58.064793 27835 scope.go:117] "RemoveContainer" containerID="f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e" Mar 18 13:45:58.065562 master-0 kubenswrapper[27835]: E0318 13:45:58.065540 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e\": container with ID starting with f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e not found: ID does not exist" containerID="f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e" Mar 18 13:45:58.065634 master-0 kubenswrapper[27835]: I0318 13:45:58.065563 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e"} err="failed to get container status \"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e\": rpc error: code = NotFound desc = could not find container \"f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e\": container with ID starting with f0803c29119c06a221f61326ef1c28cb4a037d79442e660fce97457d19ba195e not found: ID does not exist" Mar 18 13:45:58.152499 master-0 kubenswrapper[27835]: I0318 13:45:58.151584 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:58.167757 master-0 kubenswrapper[27835]: I0318 13:45:58.167594 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:58.223676 master-0 kubenswrapper[27835]: I0318 13:45:58.223513 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] 
Mar 18 13:45:58.255442 master-0 kubenswrapper[27835]: I0318 13:45:58.248471 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:58.263318 master-0 kubenswrapper[27835]: I0318 13:45:58.263270 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:58.263852 master-0 kubenswrapper[27835]: E0318 13:45:58.263821 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75fe206-50da-483e-8b9a-a86adf0082ac" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 13:45:58.263852 master-0 kubenswrapper[27835]: I0318 13:45:58.263843 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75fe206-50da-483e-8b9a-a86adf0082ac" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 13:45:58.264063 master-0 kubenswrapper[27835]: E0318 13:45:58.263869 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-log" Mar 18 13:45:58.264063 master-0 kubenswrapper[27835]: I0318 13:45:58.263905 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-log" Mar 18 13:45:58.264063 master-0 kubenswrapper[27835]: E0318 13:45:58.263920 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-metadata" Mar 18 13:45:58.264063 master-0 kubenswrapper[27835]: I0318 13:45:58.263929 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-metadata" Mar 18 13:45:58.264208 master-0 kubenswrapper[27835]: I0318 13:45:58.264182 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-metadata" Mar 18 13:45:58.264208 master-0 kubenswrapper[27835]: I0318 13:45:58.264204 27835 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" containerName="nova-metadata-log" Mar 18 13:45:58.264274 master-0 kubenswrapper[27835]: I0318 13:45:58.264224 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75fe206-50da-483e-8b9a-a86adf0082ac" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 13:45:58.265038 master-0 kubenswrapper[27835]: I0318 13:45:58.265003 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.267889 master-0 kubenswrapper[27835]: I0318 13:45:58.267841 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 18 13:45:58.268139 master-0 kubenswrapper[27835]: I0318 13:45:58.268014 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 18 13:45:58.270102 master-0 kubenswrapper[27835]: I0318 13:45:58.270067 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 18 13:45:58.296754 master-0 kubenswrapper[27835]: I0318 13:45:58.296678 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13db2dec-a8ff-45b5-b3b0-3459de6bbf41" path="/var/lib/kubelet/pods/13db2dec-a8ff-45b5-b3b0-3459de6bbf41/volumes" Mar 18 13:45:58.300585 master-0 kubenswrapper[27835]: I0318 13:45:58.297570 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75fe206-50da-483e-8b9a-a86adf0082ac" path="/var/lib/kubelet/pods/e75fe206-50da-483e-8b9a-a86adf0082ac/volumes" Mar 18 13:45:58.300585 master-0 kubenswrapper[27835]: I0318 13:45:58.298250 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:58.300732 master-0 kubenswrapper[27835]: I0318 13:45:58.300611 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:45:58.301141 master-0 kubenswrapper[27835]: I0318 13:45:58.301059 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:58.305611 master-0 kubenswrapper[27835]: I0318 13:45:58.302375 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 18 13:45:58.305611 master-0 kubenswrapper[27835]: I0318 13:45:58.303565 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 13:45:58.315618 master-0 kubenswrapper[27835]: I0318 13:45:58.313214 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:58.380453 master-0 kubenswrapper[27835]: I0318 13:45:58.379863 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.380453 master-0 kubenswrapper[27835]: I0318 13:45:58.379926 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcwcp\" (UniqueName: \"kubernetes.io/projected/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-kube-api-access-mcwcp\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.380453 master-0 kubenswrapper[27835]: I0318 13:45:58.380075 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " 
pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.380765 master-0 kubenswrapper[27835]: I0318 13:45:58.380639 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.382535 master-0 kubenswrapper[27835]: I0318 13:45:58.381056 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.484255 master-0 kubenswrapper[27835]: I0318 13:45:58.484104 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.484655 master-0 kubenswrapper[27835]: I0318 13:45:58.484580 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.484982 master-0 kubenswrapper[27835]: I0318 13:45:58.484957 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.485098 master-0 kubenswrapper[27835]: I0318 13:45:58.485085 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcwcp\" (UniqueName: \"kubernetes.io/projected/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-kube-api-access-mcwcp\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.485615 master-0 kubenswrapper[27835]: I0318 13:45:58.485597 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.485729 master-0 kubenswrapper[27835]: I0318 13:45:58.485713 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.485812 master-0 kubenswrapper[27835]: I0318 13:45:58.485799 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.485913 master-0 kubenswrapper[27835]: I0318 13:45:58.485899 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.486193 master-0 kubenswrapper[27835]: I0318 13:45:58.486173 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.486303 master-0 kubenswrapper[27835]: I0318 13:45:58.486288 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.487674 master-0 kubenswrapper[27835]: I0318 13:45:58.487630 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.488782 master-0 kubenswrapper[27835]: I0318 13:45:58.488733 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.491921 master-0 kubenswrapper[27835]: I0318 13:45:58.491876 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") 
" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.500497 master-0 kubenswrapper[27835]: I0318 13:45:58.500442 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.509253 master-0 kubenswrapper[27835]: I0318 13:45:58.509152 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcwcp\" (UniqueName: \"kubernetes.io/projected/b3975b38-2c90-4255-b2d1-ab1b2fa723b5-kube-api-access-mcwcp\") pod \"nova-cell1-novncproxy-0\" (UID: \"b3975b38-2c90-4255-b2d1-ab1b2fa723b5\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.588312 master-0 kubenswrapper[27835]: I0318 13:45:58.588225 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.588312 master-0 kubenswrapper[27835]: I0318 13:45:58.588304 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.588312 master-0 kubenswrapper[27835]: I0318 13:45:58.588329 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.588646 master-0 
kubenswrapper[27835]: I0318 13:45:58.588403 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.588646 master-0 kubenswrapper[27835]: I0318 13:45:58.588514 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.589465 master-0 kubenswrapper[27835]: I0318 13:45:58.589428 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.589975 master-0 kubenswrapper[27835]: I0318 13:45:58.589947 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:45:58.593545 master-0 kubenswrapper[27835]: I0318 13:45:58.593496 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.596040 master-0 kubenswrapper[27835]: I0318 13:45:58.595995 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.603583 master-0 kubenswrapper[27835]: I0318 13:45:58.603531 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.607353 master-0 kubenswrapper[27835]: I0318 13:45:58.607324 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg\") pod \"nova-metadata-0\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") " pod="openstack/nova-metadata-0" Mar 18 13:45:58.628438 master-0 kubenswrapper[27835]: I0318 13:45:58.626085 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:45:58.808698 master-0 kubenswrapper[27835]: I0318 13:45:58.808463 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"d2a793d4-62c6-4482-a5e5-21ed4cc72e33","Type":"ContainerStarted","Data":"14b2ec893e27a37e10e78132d83df1d711b5f4cab8df9cb9556a13020ae7f7b3"} Mar 18 13:45:58.810658 master-0 kubenswrapper[27835]: I0318 13:45:58.809266 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 18 13:45:58.810658 master-0 kubenswrapper[27835]: I0318 13:45:58.809461 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 18 13:45:58.860070 master-0 kubenswrapper[27835]: I0318 13:45:58.859980 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=79.920005954 podStartE2EDuration="2m4.859961507s" podCreationTimestamp="2026-03-18 13:43:54 +0000 UTC" firstStartedPulling="2026-03-18 13:44:08.31894301 +0000 UTC m=+1212.284154570" lastFinishedPulling="2026-03-18 13:44:53.258898563 +0000 UTC m=+1257.224110123" observedRunningTime="2026-03-18 13:45:58.852296404 +0000 UTC m=+1322.817507984" watchObservedRunningTime="2026-03-18 13:45:58.859961507 +0000 UTC m=+1322.825173067" Mar 18 13:45:59.099360 master-0 kubenswrapper[27835]: I0318 13:45:59.099074 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 13:45:59.134996 master-0 kubenswrapper[27835]: I0318 13:45:59.134754 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 18 13:45:59.205057 master-0 kubenswrapper[27835]: I0318 13:45:59.204951 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:45:59.841963 master-0 kubenswrapper[27835]: I0318 13:45:59.841870 27835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3975b38-2c90-4255-b2d1-ab1b2fa723b5","Type":"ContainerStarted","Data":"3186542d98a3f10c520829ea93529efd2ab0bcf4f7cb38458784bee41004fba1"} Mar 18 13:45:59.841963 master-0 kubenswrapper[27835]: I0318 13:45:59.841967 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b3975b38-2c90-4255-b2d1-ab1b2fa723b5","Type":"ContainerStarted","Data":"9507590e6267c58a4d07cfd12e149dbd9fca25e8b7a5f084f6c0b4b3bafbea3b"} Mar 18 13:45:59.846103 master-0 kubenswrapper[27835]: I0318 13:45:59.844863 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerStarted","Data":"f3610f6494fb27e9563ad2cb50befe234c773a41f0fe21f55ca6a508beb55696"} Mar 18 13:45:59.846103 master-0 kubenswrapper[27835]: I0318 13:45:59.844956 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerStarted","Data":"31d77bb97d052618340ff92a3dd7c1b07c30e9b59fbfd768fb7455fa70f9cefd"} Mar 18 13:45:59.846103 master-0 kubenswrapper[27835]: I0318 13:45:59.844974 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerStarted","Data":"96b8d3ae9e3ada1be16aac169cc30b24c19737cf3979d6559118f6d4099f4a38"} Mar 18 13:45:59.884488 master-0 kubenswrapper[27835]: I0318 13:45:59.883687 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.883656048 podStartE2EDuration="1.883656048s" podCreationTimestamp="2026-03-18 13:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:59.870085299 +0000 UTC m=+1323.835296869" 
watchObservedRunningTime="2026-03-18 13:45:59.883656048 +0000 UTC m=+1323.848867608" Mar 18 13:45:59.909461 master-0 kubenswrapper[27835]: I0318 13:45:59.908900 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.908864157 podStartE2EDuration="1.908864157s" podCreationTimestamp="2026-03-18 13:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:45:59.897455944 +0000 UTC m=+1323.862667524" watchObservedRunningTime="2026-03-18 13:45:59.908864157 +0000 UTC m=+1323.874075717" Mar 18 13:46:00.779134 master-0 kubenswrapper[27835]: I0318 13:46:00.779077 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/ironic-conductor-0" podUID="d2a793d4-62c6-4482-a5e5-21ed4cc72e33" containerName="ironic-conductor" probeResult="failure" output=< Mar 18 13:46:00.779134 master-0 kubenswrapper[27835]: ironic-conductor-0 is offline Mar 18 13:46:00.779134 master-0 kubenswrapper[27835]: > Mar 18 13:46:00.948140 master-0 kubenswrapper[27835]: I0318 13:46:00.948027 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:46:00.948140 master-0 kubenswrapper[27835]: I0318 13:46:00.948146 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:46:01.165856 master-0 kubenswrapper[27835]: I0318 13:46:01.165794 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 18 13:46:02.482864 master-0 kubenswrapper[27835]: I0318 13:46:02.482795 27835 trace.go:236] Trace[409792872]: "Calculate volume metrics of glance for pod openstack/glance-4f519-default-internal-api-0" (18-Mar-2026 13:46:01.232) (total time: 1250ms): Mar 18 13:46:02.482864 master-0 kubenswrapper[27835]: Trace[409792872]: [1.250019182s] [1.250019182s] END Mar 18 13:46:02.545352 master-0 
kubenswrapper[27835]: I0318 13:46:02.545189 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Mar 18 13:46:02.903560 master-0 kubenswrapper[27835]: I0318 13:46:02.903381 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 18 13:46:02.955014 master-0 kubenswrapper[27835]: I0318 13:46:02.954751 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 13:46:02.960407 master-0 kubenswrapper[27835]: I0318 13:46:02.960369 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 13:46:02.960988 master-0 kubenswrapper[27835]: I0318 13:46:02.960914 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 13:46:03.595514 master-0 kubenswrapper[27835]: I0318 13:46:03.590519 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 18 13:46:03.915936 master-0 kubenswrapper[27835]: I0318 13:46:03.915835 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 13:46:04.254196 master-0 kubenswrapper[27835]: I0318 13:46:04.254069 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f4d7d767-8d7qt"] Mar 18 13:46:04.257037 master-0 kubenswrapper[27835]: I0318 13:46:04.257000 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.308107 master-0 kubenswrapper[27835]: I0318 13:46:04.308052 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f4d7d767-8d7qt"]
Mar 18 13:46:04.405014 master-0 kubenswrapper[27835]: I0318 13:46:04.404664 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hs5w\" (UniqueName: \"kubernetes.io/projected/eef4e478-7158-4599-af3f-b53306d36487-kube-api-access-7hs5w\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.405014 master-0 kubenswrapper[27835]: I0318 13:46:04.404723 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-config\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.405014 master-0 kubenswrapper[27835]: I0318 13:46:04.404761 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-svc\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.405014 master-0 kubenswrapper[27835]: I0318 13:46:04.404789 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-nb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.405336 master-0 kubenswrapper[27835]: I0318 13:46:04.405039 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-sb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.405336 master-0 kubenswrapper[27835]: I0318 13:46:04.405287 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-swift-storage-0\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.509702 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hs5w\" (UniqueName: \"kubernetes.io/projected/eef4e478-7158-4599-af3f-b53306d36487-kube-api-access-7hs5w\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.509776 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-config\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.509823 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-svc\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.509865 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-nb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.509947 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-sb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.510029 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-swift-storage-0\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.511092 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-swift-storage-0\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512521 master-0 kubenswrapper[27835]: I0318 13:46:04.512091 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-config\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.512983 master-0 kubenswrapper[27835]: I0318 13:46:04.512818 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-dns-svc\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.520458 master-0 kubenswrapper[27835]: I0318 13:46:04.515250 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-sb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.520458 master-0 kubenswrapper[27835]: I0318 13:46:04.515467 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eef4e478-7158-4599-af3f-b53306d36487-ovsdbserver-nb\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.549459 master-0 kubenswrapper[27835]: I0318 13:46:04.534956 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hs5w\" (UniqueName: \"kubernetes.io/projected/eef4e478-7158-4599-af3f-b53306d36487-kube-api-access-7hs5w\") pod \"dnsmasq-dns-54f4d7d767-8d7qt\" (UID: \"eef4e478-7158-4599-af3f-b53306d36487\") " pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:04.590475 master-0 kubenswrapper[27835]: I0318 13:46:04.587253 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:05.138025 master-0 kubenswrapper[27835]: I0318 13:46:05.137928 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f4d7d767-8d7qt"]
Mar 18 13:46:05.142363 master-0 kubenswrapper[27835]: W0318 13:46:05.142312 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef4e478_7158_4599_af3f_b53306d36487.slice/crio-553805e0cabbab1bf5b60fa12339f30a8469c2df7580c6663ce8336c49b031db WatchSource:0}: Error finding container 553805e0cabbab1bf5b60fa12339f30a8469c2df7580c6663ce8336c49b031db: Status 404 returned error can't find the container with id 553805e0cabbab1bf5b60fa12339f30a8469c2df7580c6663ce8336c49b031db
Mar 18 13:46:05.939223 master-0 kubenswrapper[27835]: I0318 13:46:05.938375 27835 generic.go:334] "Generic (PLEG): container finished" podID="eef4e478-7158-4599-af3f-b53306d36487" containerID="5c4ac0175fc35dee20b92ff498a867fa174d03e5efbd40bf683965b2eec45811" exitCode=0
Mar 18 13:46:05.940566 master-0 kubenswrapper[27835]: I0318 13:46:05.940195 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt" event={"ID":"eef4e478-7158-4599-af3f-b53306d36487","Type":"ContainerDied","Data":"5c4ac0175fc35dee20b92ff498a867fa174d03e5efbd40bf683965b2eec45811"}
Mar 18 13:46:05.940566 master-0 kubenswrapper[27835]: I0318 13:46:05.940224 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt" event={"ID":"eef4e478-7158-4599-af3f-b53306d36487","Type":"ContainerStarted","Data":"553805e0cabbab1bf5b60fa12339f30a8469c2df7580c6663ce8336c49b031db"}
Mar 18 13:46:06.899134 master-0 kubenswrapper[27835]: I0318 13:46:06.899061 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:06.954300 master-0 kubenswrapper[27835]: I0318 13:46:06.954229 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-log" containerID="cri-o://bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0" gracePeriod=30
Mar 18 13:46:06.954835 master-0 kubenswrapper[27835]: I0318 13:46:06.954648 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt" event={"ID":"eef4e478-7158-4599-af3f-b53306d36487","Type":"ContainerStarted","Data":"d3fddee7b2fd6c989f38c2b3599f59330f52969a2f5afe0202a570181c709474"}
Mar 18 13:46:06.954835 master-0 kubenswrapper[27835]: I0318 13:46:06.954673 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-api" containerID="cri-o://112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d" gracePeriod=30
Mar 18 13:46:06.955592 master-0 kubenswrapper[27835]: I0318 13:46:06.955207 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:06.985979 master-0 kubenswrapper[27835]: I0318 13:46:06.985741 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt" podStartSLOduration=2.985714418 podStartE2EDuration="2.985714418s" podCreationTimestamp="2026-03-18 13:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:06.981035083 +0000 UTC m=+1330.946246643" watchObservedRunningTime="2026-03-18 13:46:06.985714418 +0000 UTC m=+1330.950925978"
Mar 18 13:46:07.969439 master-0 kubenswrapper[27835]: I0318 13:46:07.968792 27835 generic.go:334] "Generic (PLEG): container finished" podID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerID="bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0" exitCode=143
Mar 18 13:46:07.970023 master-0 kubenswrapper[27835]: I0318 13:46:07.969466 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerDied","Data":"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"}
Mar 18 13:46:08.590451 master-0 kubenswrapper[27835]: I0318 13:46:08.590338 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:46:08.616818 master-0 kubenswrapper[27835]: I0318 13:46:08.616748 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:46:08.627289 master-0 kubenswrapper[27835]: I0318 13:46:08.627206 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 18 13:46:08.627289 master-0 kubenswrapper[27835]: I0318 13:46:08.627283 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Mar 18 13:46:09.002997 master-0 kubenswrapper[27835]: I0318 13:46:09.002928 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Mar 18 13:46:09.641736 master-0 kubenswrapper[27835]: I0318 13:46:09.641651 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:46:09.641736 master-0 kubenswrapper[27835]: I0318 13:46:09.641666 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.7:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 13:46:10.692283 master-0 kubenswrapper[27835]: I0318 13:46:10.692216 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:10.769243 master-0 kubenswrapper[27835]: I0318 13:46:10.769167 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle\") pod \"f9716d8b-d31c-4c84-af6b-fc8881a12372\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") "
Mar 18 13:46:10.769599 master-0 kubenswrapper[27835]: I0318 13:46:10.769543 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs\") pod \"f9716d8b-d31c-4c84-af6b-fc8881a12372\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") "
Mar 18 13:46:10.769698 master-0 kubenswrapper[27835]: I0318 13:46:10.769631 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data\") pod \"f9716d8b-d31c-4c84-af6b-fc8881a12372\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") "
Mar 18 13:46:10.769871 master-0 kubenswrapper[27835]: I0318 13:46:10.769831 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl9sm\" (UniqueName: \"kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm\") pod \"f9716d8b-d31c-4c84-af6b-fc8881a12372\" (UID: \"f9716d8b-d31c-4c84-af6b-fc8881a12372\") "
Mar 18 13:46:10.769976 master-0 kubenswrapper[27835]: I0318 13:46:10.769940 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs" (OuterVolumeSpecName: "logs") pod "f9716d8b-d31c-4c84-af6b-fc8881a12372" (UID: "f9716d8b-d31c-4c84-af6b-fc8881a12372"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:46:10.770877 master-0 kubenswrapper[27835]: I0318 13:46:10.770836 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9716d8b-d31c-4c84-af6b-fc8881a12372-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:10.773465 master-0 kubenswrapper[27835]: I0318 13:46:10.773353 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm" (OuterVolumeSpecName: "kube-api-access-fl9sm") pod "f9716d8b-d31c-4c84-af6b-fc8881a12372" (UID: "f9716d8b-d31c-4c84-af6b-fc8881a12372"). InnerVolumeSpecName "kube-api-access-fl9sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:46:10.799147 master-0 kubenswrapper[27835]: I0318 13:46:10.799072 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9716d8b-d31c-4c84-af6b-fc8881a12372" (UID: "f9716d8b-d31c-4c84-af6b-fc8881a12372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:10.801560 master-0 kubenswrapper[27835]: I0318 13:46:10.801499 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data" (OuterVolumeSpecName: "config-data") pod "f9716d8b-d31c-4c84-af6b-fc8881a12372" (UID: "f9716d8b-d31c-4c84-af6b-fc8881a12372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:10.872770 master-0 kubenswrapper[27835]: I0318 13:46:10.872718 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:10.872770 master-0 kubenswrapper[27835]: I0318 13:46:10.872759 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl9sm\" (UniqueName: \"kubernetes.io/projected/f9716d8b-d31c-4c84-af6b-fc8881a12372-kube-api-access-fl9sm\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:10.872770 master-0 kubenswrapper[27835]: I0318 13:46:10.872771 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9716d8b-d31c-4c84-af6b-fc8881a12372-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:11.011859 master-0 kubenswrapper[27835]: I0318 13:46:11.011779 27835 generic.go:334] "Generic (PLEG): container finished" podID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerID="112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d" exitCode=0
Mar 18 13:46:11.011859 master-0 kubenswrapper[27835]: I0318 13:46:11.011859 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerDied","Data":"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"}
Mar 18 13:46:11.012131 master-0 kubenswrapper[27835]: I0318 13:46:11.011870 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:11.012131 master-0 kubenswrapper[27835]: I0318 13:46:11.011913 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9716d8b-d31c-4c84-af6b-fc8881a12372","Type":"ContainerDied","Data":"d322f9bc8cdb7b68ec6c9076e5a8fde9efcb22bee8bba0616b6a9fd9d1127531"}
Mar 18 13:46:11.012131 master-0 kubenswrapper[27835]: I0318 13:46:11.011944 27835 scope.go:117] "RemoveContainer" containerID="112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"
Mar 18 13:46:11.047726 master-0 kubenswrapper[27835]: I0318 13:46:11.047536 27835 scope.go:117] "RemoveContainer" containerID="bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"
Mar 18 13:46:11.073943 master-0 kubenswrapper[27835]: I0318 13:46:11.073636 27835 scope.go:117] "RemoveContainer" containerID="112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"
Mar 18 13:46:11.074215 master-0 kubenswrapper[27835]: E0318 13:46:11.074158 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d\": container with ID starting with 112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d not found: ID does not exist" containerID="112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"
Mar 18 13:46:11.074312 master-0 kubenswrapper[27835]: I0318 13:46:11.074214 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d"} err="failed to get container status \"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d\": rpc error: code = NotFound desc = could not find container \"112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d\": container with ID starting with 112ad3178a976b8cac301969378ca0f4ac69ac1e159bc964ae5b27acd99b2f2d not found: ID does not exist"
Mar 18 13:46:11.074312 master-0 kubenswrapper[27835]: I0318 13:46:11.074249 27835 scope.go:117] "RemoveContainer" containerID="bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"
Mar 18 13:46:11.074744 master-0 kubenswrapper[27835]: E0318 13:46:11.074690 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0\": container with ID starting with bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0 not found: ID does not exist" containerID="bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"
Mar 18 13:46:11.074817 master-0 kubenswrapper[27835]: I0318 13:46:11.074746 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0"} err="failed to get container status \"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0\": rpc error: code = NotFound desc = could not find container \"bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0\": container with ID starting with bdec1090d481e10ee7eb36db78c140a73ec64df0d2558f68b80654f1ef70bfe0 not found: ID does not exist"
Mar 18 13:46:14.588648 master-0 kubenswrapper[27835]: I0318 13:46:14.588575 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f4d7d767-8d7qt"
Mar 18 13:46:16.626851 master-0 kubenswrapper[27835]: I0318 13:46:16.626753 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 18 13:46:16.626851 master-0 kubenswrapper[27835]: I0318 13:46:16.626828 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 18 13:46:17.069057 master-0 kubenswrapper[27835]: I0318 13:46:17.068987 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:18.759713 master-0 kubenswrapper[27835]: I0318 13:46:18.759649 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:18.772741 master-0 kubenswrapper[27835]: I0318 13:46:18.772676 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 18 13:46:18.820797 master-0 kubenswrapper[27835]: I0318 13:46:18.820756 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 18 13:46:18.823813 master-0 kubenswrapper[27835]: I0318 13:46:18.823777 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 18 13:46:18.961887 master-0 kubenswrapper[27835]: I0318 13:46:18.961808 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:18.962529 master-0 kubenswrapper[27835]: E0318 13:46:18.962493 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-log"
Mar 18 13:46:18.962529 master-0 kubenswrapper[27835]: I0318 13:46:18.962525 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-log"
Mar 18 13:46:18.962618 master-0 kubenswrapper[27835]: E0318 13:46:18.962549 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-api"
Mar 18 13:46:18.962618 master-0 kubenswrapper[27835]: I0318 13:46:18.962558 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-api"
Mar 18 13:46:18.962817 master-0 kubenswrapper[27835]: I0318 13:46:18.962789 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-log"
Mar 18 13:46:18.962865 master-0 kubenswrapper[27835]: I0318 13:46:18.962818 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" containerName="nova-api-api"
Mar 18 13:46:18.965564 master-0 kubenswrapper[27835]: I0318 13:46:18.965529 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:18.991935 master-0 kubenswrapper[27835]: I0318 13:46:18.991852 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 18 13:46:18.992158 master-0 kubenswrapper[27835]: I0318 13:46:18.992108 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 13:46:18.992478 master-0 kubenswrapper[27835]: I0318 13:46:18.992266 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 18 13:46:19.015501 master-0 kubenswrapper[27835]: I0318 13:46:19.014625 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:19.045438 master-0 kubenswrapper[27835]: I0318 13:46:19.043966 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-pgthm"]
Mar 18 13:46:19.059436 master-0 kubenswrapper[27835]: I0318 13:46:19.046101 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.059436 master-0 kubenswrapper[27835]: I0318 13:46:19.049015 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Mar 18 13:46:19.059436 master-0 kubenswrapper[27835]: I0318 13:46:19.049216 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.125935 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zp7\" (UniqueName: \"kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126004 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126033 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6l5q\" (UniqueName: \"kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126074 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126109 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126164 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126207 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126236 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126272 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.128825 master-0 kubenswrapper[27835]: I0318 13:46:19.126290 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.160467 master-0 kubenswrapper[27835]: I0318 13:46:19.159116 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-hqvsx"]
Mar 18 13:46:19.180435 master-0 kubenswrapper[27835]: I0318 13:46:19.174330 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.227833 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.227997 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.228136 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zp7\" (UniqueName: \"kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.228182 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfx6\" (UniqueName: \"kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.230541 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.231214 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.231258 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6l5q\" (UniqueName: \"kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.231631 master-0 kubenswrapper[27835]: I0318 13:46:19.231647 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231722 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231770 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231844 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231918 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231934 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:19.232077 master-0 kubenswrapper[27835]: I0318 13:46:19.231980 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.232433 master-0 kubenswrapper[27835]: I0318 13:46:19.232395 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.236388 master-0 kubenswrapper[27835]: I0318 13:46:19.235326 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.237369 master-0 kubenswrapper[27835]: I0318 13:46:19.236682 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.237924 master-0 kubenswrapper[27835]: I0318 13:46:19.237585 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.238681 master-0 kubenswrapper[27835]: I0318 13:46:19.238662 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0"
Mar 18 13:46:19.248966 master-0 kubenswrapper[27835]: I0318 13:46:19.248902 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgthm"]
Mar 18 13:46:19.254927 master-0 kubenswrapper[27835]: I0318 13:46:19.249744 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.254927 master-0 kubenswrapper[27835]: I0318 13:46:19.250139 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 18 13:46:19.260437 master-0 kubenswrapper[27835]: I0318 13:46:19.257372 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.260437 master-0 kubenswrapper[27835]: I0318 13:46:19.257462 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:19.271914 master-0 kubenswrapper[27835]: I0318 13:46:19.271785 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-hqvsx"]
Mar 18 13:46:19.290489 master-0 kubenswrapper[27835]: I0318
13:46:19.290449 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zp7\" (UniqueName: \"kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7\") pod \"nova-api-0\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") " pod="openstack/nova-api-0" Mar 18 13:46:19.294473 master-0 kubenswrapper[27835]: I0318 13:46:19.294435 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6l5q\" (UniqueName: \"kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q\") pod \"nova-cell1-cell-mapping-pgthm\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") " pod="openstack/nova-cell1-cell-mapping-pgthm" Mar 18 13:46:19.368031 master-0 kubenswrapper[27835]: I0318 13:46:19.367963 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.368936 master-0 kubenswrapper[27835]: I0318 13:46:19.368898 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfx6\" (UniqueName: \"kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.369683 master-0 kubenswrapper[27835]: I0318 13:46:19.369660 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.373271 master-0 
kubenswrapper[27835]: I0318 13:46:19.373234 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.376833 master-0 kubenswrapper[27835]: I0318 13:46:19.376790 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.389142 master-0 kubenswrapper[27835]: I0318 13:46:19.387579 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"] Mar 18 13:46:19.389142 master-0 kubenswrapper[27835]: I0318 13:46:19.387908 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="dnsmasq-dns" containerID="cri-o://41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994" gracePeriod=10 Mar 18 13:46:19.393764 master-0 kubenswrapper[27835]: I0318 13:46:19.393717 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfx6\" (UniqueName: \"kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.397281 master-0 kubenswrapper[27835]: I0318 13:46:19.397226 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts\") pod 
\"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.444966 master-0 kubenswrapper[27835]: I0318 13:46:19.444134 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data\") pod \"nova-cell1-host-discover-hqvsx\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:19.451179 master-0 kubenswrapper[27835]: I0318 13:46:19.450505 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgthm" Mar 18 13:46:19.588440 master-0 kubenswrapper[27835]: I0318 13:46:19.588178 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 13:46:19.671074 master-0 kubenswrapper[27835]: I0318 13:46:19.671003 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:20.214474 master-0 kubenswrapper[27835]: I0318 13:46:20.210700 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-pgthm"] Mar 18 13:46:20.255446 master-0 kubenswrapper[27835]: I0318 13:46:20.237318 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:46:20.260443 master-0 kubenswrapper[27835]: I0318 13:46:20.256163 27835 generic.go:334] "Generic (PLEG): container finished" podID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerID="41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994" exitCode=0 Mar 18 13:46:20.260443 master-0 kubenswrapper[27835]: I0318 13:46:20.257365 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" Mar 18 13:46:20.260443 master-0 kubenswrapper[27835]: I0318 13:46:20.257625 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" event={"ID":"b59d4086-4f07-49d4-bbc4-6fbb69f545c7","Type":"ContainerDied","Data":"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994"} Mar 18 13:46:20.260443 master-0 kubenswrapper[27835]: I0318 13:46:20.257658 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78f68d4c8f-sdxd4" event={"ID":"b59d4086-4f07-49d4-bbc4-6fbb69f545c7","Type":"ContainerDied","Data":"b6dc17a96f87bda3cd5426f233d94106e1bc48f14b1eeb254b0202bc013b842a"} Mar 18 13:46:20.260443 master-0 kubenswrapper[27835]: I0318 13:46:20.257678 27835 scope.go:117] "RemoveContainer" containerID="41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994" Mar 18 13:46:20.379446 master-0 kubenswrapper[27835]: I0318 13:46:20.379187 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 13:46:20.379446 master-0 kubenswrapper[27835]: I0318 13:46:20.379310 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 13:46:20.379612 master-0 kubenswrapper[27835]: I0318 13:46:20.379518 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 
13:46:20.379612 master-0 kubenswrapper[27835]: I0318 13:46:20.379588 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmdr\" (UniqueName: \"kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 13:46:20.379696 master-0 kubenswrapper[27835]: I0318 13:46:20.379627 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 13:46:20.379696 master-0 kubenswrapper[27835]: I0318 13:46:20.379675 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb\") pod \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\" (UID: \"b59d4086-4f07-49d4-bbc4-6fbb69f545c7\") " Mar 18 13:46:20.396802 master-0 kubenswrapper[27835]: I0318 13:46:20.396743 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9716d8b-d31c-4c84-af6b-fc8881a12372" path="/var/lib/kubelet/pods/f9716d8b-d31c-4c84-af6b-fc8881a12372/volumes" Mar 18 13:46:20.446593 master-0 kubenswrapper[27835]: I0318 13:46:20.441847 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr" (OuterVolumeSpecName: "kube-api-access-spmdr") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "kube-api-access-spmdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:46:20.480744 master-0 kubenswrapper[27835]: I0318 13:46:20.480119 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:46:20.481745 master-0 kubenswrapper[27835]: I0318 13:46:20.481071 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 13:46:20.485031 master-0 kubenswrapper[27835]: I0318 13:46:20.482437 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spmdr\" (UniqueName: \"kubernetes.io/projected/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-kube-api-access-spmdr\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.485031 master-0 kubenswrapper[27835]: I0318 13:46:20.482460 27835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.493618 master-0 kubenswrapper[27835]: I0318 13:46:20.493554 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config" (OuterVolumeSpecName: "config") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:46:20.502663 master-0 kubenswrapper[27835]: I0318 13:46:20.502587 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:46:20.529619 master-0 kubenswrapper[27835]: I0318 13:46:20.529540 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-hqvsx"] Mar 18 13:46:20.539838 master-0 kubenswrapper[27835]: I0318 13:46:20.539588 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:46:20.544665 master-0 kubenswrapper[27835]: I0318 13:46:20.541641 27835 scope.go:117] "RemoveContainer" containerID="f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3" Mar 18 13:46:20.544665 master-0 kubenswrapper[27835]: I0318 13:46:20.543067 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b59d4086-4f07-49d4-bbc4-6fbb69f545c7" (UID: "b59d4086-4f07-49d4-bbc4-6fbb69f545c7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: I0318 13:46:20.579104 27835 scope.go:117] "RemoveContainer" containerID="41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: E0318 13:46:20.579475 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994\": container with ID starting with 41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994 not found: ID does not exist" containerID="41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: I0318 13:46:20.579503 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994"} err="failed to get container status \"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994\": rpc error: code = NotFound desc = could not find container \"41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994\": container with ID starting with 41af6681e719bc8a416fb08df016e44322a925af8eb52c5c4bcbe18885a8b994 not found: ID does not exist" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: I0318 13:46:20.579524 27835 scope.go:117] "RemoveContainer" containerID="f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: E0318 13:46:20.579734 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3\": container with ID starting with f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3 not found: ID does not exist" 
containerID="f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3" Mar 18 13:46:20.579805 master-0 kubenswrapper[27835]: I0318 13:46:20.579753 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3"} err="failed to get container status \"f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3\": rpc error: code = NotFound desc = could not find container \"f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3\": container with ID starting with f7c7b71330a3d369808c2843a4dc3a5a27109435ac9d1503d23a6f187959e5c3 not found: ID does not exist" Mar 18 13:46:20.587635 master-0 kubenswrapper[27835]: I0318 13:46:20.587574 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.587635 master-0 kubenswrapper[27835]: I0318 13:46:20.587625 27835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.587759 master-0 kubenswrapper[27835]: I0318 13:46:20.587637 27835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.587759 master-0 kubenswrapper[27835]: I0318 13:46:20.587649 27835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b59d4086-4f07-49d4-bbc4-6fbb69f545c7-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:20.627486 master-0 kubenswrapper[27835]: I0318 13:46:20.625244 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"] Mar 18 
13:46:20.641177 master-0 kubenswrapper[27835]: I0318 13:46:20.641100 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78f68d4c8f-sdxd4"] Mar 18 13:46:21.297520 master-0 kubenswrapper[27835]: I0318 13:46:21.293589 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-hqvsx" event={"ID":"d5ac354c-0c65-4201-987a-da6a75a7a63c","Type":"ContainerStarted","Data":"e696a90a2c60d618b8c3941cf72a4e458a85392504eaa3071b4be42aa752beac"} Mar 18 13:46:21.297520 master-0 kubenswrapper[27835]: I0318 13:46:21.293687 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-hqvsx" event={"ID":"d5ac354c-0c65-4201-987a-da6a75a7a63c","Type":"ContainerStarted","Data":"7a8704ffaa23e34163b30080117b0cef69fc18cfe03bd9efcdb101a895da72dc"} Mar 18 13:46:21.306643 master-0 kubenswrapper[27835]: I0318 13:46:21.306580 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgthm" event={"ID":"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0","Type":"ContainerStarted","Data":"091ac721966390f31c0cf90b9eb2d145b7b52f03ebd6069cfdd6a6bd232dda86"} Mar 18 13:46:21.306730 master-0 kubenswrapper[27835]: I0318 13:46:21.306648 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgthm" event={"ID":"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0","Type":"ContainerStarted","Data":"166510bfaa3d649cef8d653ae00bc22ca5daa2dd320a837430ea4739a5c7576f"} Mar 18 13:46:21.335572 master-0 kubenswrapper[27835]: I0318 13:46:21.335468 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-hqvsx" podStartSLOduration=3.335445835 podStartE2EDuration="3.335445835s" podCreationTimestamp="2026-03-18 13:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:21.323223111 +0000 UTC m=+1345.288434671" 
watchObservedRunningTime="2026-03-18 13:46:21.335445835 +0000 UTC m=+1345.300657395" Mar 18 13:46:21.348789 master-0 kubenswrapper[27835]: I0318 13:46:21.348732 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerStarted","Data":"08932d5fedde2ce87c46b6959887f86e21c0b0d0703c609257167d1ca9e3a711"} Mar 18 13:46:21.348789 master-0 kubenswrapper[27835]: I0318 13:46:21.348772 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerStarted","Data":"73e505276c2d70732d1071ae922208599d4de0182d08ed51bf43788cb595fd82"} Mar 18 13:46:21.348789 master-0 kubenswrapper[27835]: I0318 13:46:21.348781 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerStarted","Data":"cb0f2dfc68beb9f9688324eccbd865fd4bff73278ff06eb91c6d42c5ca695045"} Mar 18 13:46:21.352601 master-0 kubenswrapper[27835]: I0318 13:46:21.352516 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-pgthm" podStartSLOduration=3.352496447 podStartE2EDuration="3.352496447s" podCreationTimestamp="2026-03-18 13:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:21.340131079 +0000 UTC m=+1345.305342639" watchObservedRunningTime="2026-03-18 13:46:21.352496447 +0000 UTC m=+1345.317708007" Mar 18 13:46:21.391465 master-0 kubenswrapper[27835]: I0318 13:46:21.391362 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.391343137 podStartE2EDuration="3.391343137s" podCreationTimestamp="2026-03-18 13:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-18 13:46:21.370231517 +0000 UTC m=+1345.335443067" watchObservedRunningTime="2026-03-18 13:46:21.391343137 +0000 UTC m=+1345.356554707" Mar 18 13:46:22.299729 master-0 kubenswrapper[27835]: I0318 13:46:22.299549 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" path="/var/lib/kubelet/pods/b59d4086-4f07-49d4-bbc4-6fbb69f545c7/volumes" Mar 18 13:46:24.389910 master-0 kubenswrapper[27835]: I0318 13:46:24.389843 27835 generic.go:334] "Generic (PLEG): container finished" podID="d5ac354c-0c65-4201-987a-da6a75a7a63c" containerID="e696a90a2c60d618b8c3941cf72a4e458a85392504eaa3071b4be42aa752beac" exitCode=0 Mar 18 13:46:24.389910 master-0 kubenswrapper[27835]: I0318 13:46:24.389897 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-hqvsx" event={"ID":"d5ac354c-0c65-4201-987a-da6a75a7a63c","Type":"ContainerDied","Data":"e696a90a2c60d618b8c3941cf72a4e458a85392504eaa3071b4be42aa752beac"} Mar 18 13:46:25.882943 master-0 kubenswrapper[27835]: I0318 13:46:25.882899 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-hqvsx" Mar 18 13:46:25.945647 master-0 kubenswrapper[27835]: I0318 13:46:25.943102 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data\") pod \"d5ac354c-0c65-4201-987a-da6a75a7a63c\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " Mar 18 13:46:25.945647 master-0 kubenswrapper[27835]: I0318 13:46:25.943265 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle\") pod \"d5ac354c-0c65-4201-987a-da6a75a7a63c\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " Mar 18 13:46:25.945647 master-0 kubenswrapper[27835]: I0318 13:46:25.943341 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfx6\" (UniqueName: \"kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6\") pod \"d5ac354c-0c65-4201-987a-da6a75a7a63c\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " Mar 18 13:46:25.945647 master-0 kubenswrapper[27835]: I0318 13:46:25.943559 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts\") pod \"d5ac354c-0c65-4201-987a-da6a75a7a63c\" (UID: \"d5ac354c-0c65-4201-987a-da6a75a7a63c\") " Mar 18 13:46:25.948376 master-0 kubenswrapper[27835]: I0318 13:46:25.948291 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6" (OuterVolumeSpecName: "kube-api-access-wsfx6") pod "d5ac354c-0c65-4201-987a-da6a75a7a63c" (UID: "d5ac354c-0c65-4201-987a-da6a75a7a63c"). InnerVolumeSpecName "kube-api-access-wsfx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:46:25.951653 master-0 kubenswrapper[27835]: I0318 13:46:25.951624 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts" (OuterVolumeSpecName: "scripts") pod "d5ac354c-0c65-4201-987a-da6a75a7a63c" (UID: "d5ac354c-0c65-4201-987a-da6a75a7a63c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:25.973881 master-0 kubenswrapper[27835]: I0318 13:46:25.973738 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data" (OuterVolumeSpecName: "config-data") pod "d5ac354c-0c65-4201-987a-da6a75a7a63c" (UID: "d5ac354c-0c65-4201-987a-da6a75a7a63c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:25.976130 master-0 kubenswrapper[27835]: I0318 13:46:25.976073 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ac354c-0c65-4201-987a-da6a75a7a63c" (UID: "d5ac354c-0c65-4201-987a-da6a75a7a63c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:26.047269 master-0 kubenswrapper[27835]: I0318 13:46:26.047224 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:26.047459 master-0 kubenswrapper[27835]: I0318 13:46:26.047445 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfx6\" (UniqueName: \"kubernetes.io/projected/d5ac354c-0c65-4201-987a-da6a75a7a63c-kube-api-access-wsfx6\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:26.047531 master-0 kubenswrapper[27835]: I0318 13:46:26.047521 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:26.047591 master-0 kubenswrapper[27835]: I0318 13:46:26.047581 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ac354c-0c65-4201-987a-da6a75a7a63c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:26.415027 master-0 kubenswrapper[27835]: I0318 13:46:26.414966 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-hqvsx" event={"ID":"d5ac354c-0c65-4201-987a-da6a75a7a63c","Type":"ContainerDied","Data":"7a8704ffaa23e34163b30080117b0cef69fc18cfe03bd9efcdb101a895da72dc"}
Mar 18 13:46:26.415027 master-0 kubenswrapper[27835]: I0318 13:46:26.415019 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8704ffaa23e34163b30080117b0cef69fc18cfe03bd9efcdb101a895da72dc"
Mar 18 13:46:26.415263 master-0 kubenswrapper[27835]: I0318 13:46:26.415135 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-hqvsx"
Mar 18 13:46:26.417491 master-0 kubenswrapper[27835]: I0318 13:46:26.417442 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgthm" event={"ID":"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0","Type":"ContainerDied","Data":"091ac721966390f31c0cf90b9eb2d145b7b52f03ebd6069cfdd6a6bd232dda86"}
Mar 18 13:46:26.417645 master-0 kubenswrapper[27835]: I0318 13:46:26.417566 27835 generic.go:334] "Generic (PLEG): container finished" podID="fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" containerID="091ac721966390f31c0cf90b9eb2d145b7b52f03ebd6069cfdd6a6bd232dda86" exitCode=0
Mar 18 13:46:27.872059 master-0 kubenswrapper[27835]: I0318 13:46:27.871994 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:27.995439 master-0 kubenswrapper[27835]: I0318 13:46:27.994394 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data\") pod \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") "
Mar 18 13:46:27.995439 master-0 kubenswrapper[27835]: I0318 13:46:27.994571 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle\") pod \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") "
Mar 18 13:46:27.995439 master-0 kubenswrapper[27835]: I0318 13:46:27.994603 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts\") pod \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") "
Mar 18 13:46:27.995439 master-0 kubenswrapper[27835]: I0318 13:46:27.994921 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6l5q\" (UniqueName: \"kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q\") pod \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\" (UID: \"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0\") "
Mar 18 13:46:28.004435 master-0 kubenswrapper[27835]: I0318 13:46:27.997584 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts" (OuterVolumeSpecName: "scripts") pod "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" (UID: "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:28.004435 master-0 kubenswrapper[27835]: I0318 13:46:27.998638 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q" (OuterVolumeSpecName: "kube-api-access-w6l5q") pod "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" (UID: "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0"). InnerVolumeSpecName "kube-api-access-w6l5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:46:28.024066 master-0 kubenswrapper[27835]: I0318 13:46:28.023960 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data" (OuterVolumeSpecName: "config-data") pod "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" (UID: "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:28.026395 master-0 kubenswrapper[27835]: I0318 13:46:28.026337 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" (UID: "fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:28.097802 master-0 kubenswrapper[27835]: I0318 13:46:28.097735 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6l5q\" (UniqueName: \"kubernetes.io/projected/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-kube-api-access-w6l5q\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:28.097802 master-0 kubenswrapper[27835]: I0318 13:46:28.097794 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:28.097802 master-0 kubenswrapper[27835]: I0318 13:46:28.097807 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:28.097802 master-0 kubenswrapper[27835]: I0318 13:46:28.097816 27835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:28.446101 master-0 kubenswrapper[27835]: I0318 13:46:28.446028 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-pgthm" event={"ID":"fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0","Type":"ContainerDied","Data":"166510bfaa3d649cef8d653ae00bc22ca5daa2dd320a837430ea4739a5c7576f"}
Mar 18 13:46:28.446101 master-0 kubenswrapper[27835]: I0318 13:46:28.446100 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="166510bfaa3d649cef8d653ae00bc22ca5daa2dd320a837430ea4739a5c7576f"
Mar 18 13:46:28.446505 master-0 kubenswrapper[27835]: I0318 13:46:28.446132 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-pgthm"
Mar 18 13:46:28.757230 master-0 kubenswrapper[27835]: I0318 13:46:28.753525 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:28.757230 master-0 kubenswrapper[27835]: I0318 13:46:28.753949 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-log" containerID="cri-o://73e505276c2d70732d1071ae922208599d4de0182d08ed51bf43788cb595fd82" gracePeriod=30
Mar 18 13:46:28.757230 master-0 kubenswrapper[27835]: I0318 13:46:28.754237 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-api" containerID="cri-o://08932d5fedde2ce87c46b6959887f86e21c0b0d0703c609257167d1ca9e3a711" gracePeriod=30
Mar 18 13:46:28.777593 master-0 kubenswrapper[27835]: I0318 13:46:28.777299 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 13:46:28.777709 master-0 kubenswrapper[27835]: I0318 13:46:28.777647 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerName="nova-scheduler-scheduler" containerID="cri-o://82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" gracePeriod=30
Mar 18 13:46:28.856463 master-0 kubenswrapper[27835]: I0318 13:46:28.854939 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 13:46:28.856463 master-0 kubenswrapper[27835]: I0318 13:46:28.855190 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-log" containerID="cri-o://31d77bb97d052618340ff92a3dd7c1b07c30e9b59fbfd768fb7455fa70f9cefd" gracePeriod=30
Mar 18 13:46:28.856463 master-0 kubenswrapper[27835]: I0318 13:46:28.855348 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-metadata" containerID="cri-o://f3610f6494fb27e9563ad2cb50befe234c773a41f0fe21f55ca6a508beb55696" gracePeriod=30
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464037 27835 generic.go:334] "Generic (PLEG): container finished" podID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerID="08932d5fedde2ce87c46b6959887f86e21c0b0d0703c609257167d1ca9e3a711" exitCode=0
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464082 27835 generic.go:334] "Generic (PLEG): container finished" podID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerID="73e505276c2d70732d1071ae922208599d4de0182d08ed51bf43788cb595fd82" exitCode=143
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464138 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerDied","Data":"08932d5fedde2ce87c46b6959887f86e21c0b0d0703c609257167d1ca9e3a711"}
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464171 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerDied","Data":"73e505276c2d70732d1071ae922208599d4de0182d08ed51bf43788cb595fd82"}
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464181 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4","Type":"ContainerDied","Data":"cb0f2dfc68beb9f9688324eccbd865fd4bff73278ff06eb91c6d42c5ca695045"}
Mar 18 13:46:29.464198 master-0 kubenswrapper[27835]: I0318 13:46:29.464193 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0f2dfc68beb9f9688324eccbd865fd4bff73278ff06eb91c6d42c5ca695045"
Mar 18 13:46:29.467829 master-0 kubenswrapper[27835]: I0318 13:46:29.467788 27835 generic.go:334] "Generic (PLEG): container finished" podID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerID="31d77bb97d052618340ff92a3dd7c1b07c30e9b59fbfd768fb7455fa70f9cefd" exitCode=143
Mar 18 13:46:29.467958 master-0 kubenswrapper[27835]: I0318 13:46:29.467835 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerDied","Data":"31d77bb97d052618340ff92a3dd7c1b07c30e9b59fbfd768fb7455fa70f9cefd"}
Mar 18 13:46:29.471363 master-0 kubenswrapper[27835]: I0318 13:46:29.471312 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:29.532229 master-0 kubenswrapper[27835]: I0318 13:46:29.532165 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.532492 master-0 kubenswrapper[27835]: I0318 13:46:29.532261 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zp7\" (UniqueName: \"kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.532492 master-0 kubenswrapper[27835]: I0318 13:46:29.532296 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.532605 master-0 kubenswrapper[27835]: I0318 13:46:29.532503 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.532605 master-0 kubenswrapper[27835]: I0318 13:46:29.532538 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.532732 master-0 kubenswrapper[27835]: I0318 13:46:29.532696 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs\") pod \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\" (UID: \"3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4\") "
Mar 18 13:46:29.533125 master-0 kubenswrapper[27835]: I0318 13:46:29.533083 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs" (OuterVolumeSpecName: "logs") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:46:29.540772 master-0 kubenswrapper[27835]: I0318 13:46:29.540712 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7" (OuterVolumeSpecName: "kube-api-access-l6zp7") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "kube-api-access-l6zp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 13:46:29.562554 master-0 kubenswrapper[27835]: I0318 13:46:29.562491 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:29.565774 master-0 kubenswrapper[27835]: I0318 13:46:29.565732 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data" (OuterVolumeSpecName: "config-data") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:29.593279 master-0 kubenswrapper[27835]: I0318 13:46:29.592961 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:29.601671 master-0 kubenswrapper[27835]: I0318 13:46:29.601596 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" (UID: "3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634837 27835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634903 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634917 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zp7\" (UniqueName: \"kubernetes.io/projected/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-kube-api-access-l6zp7\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634936 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634948 27835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:29.634989 master-0 kubenswrapper[27835]: I0318 13:46:29.634958 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:30.005356 master-0 kubenswrapper[27835]: E0318 13:46:30.005264 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 18 13:46:30.007665 master-0 kubenswrapper[27835]: E0318 13:46:30.007582 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 18 13:46:30.009363 master-0 kubenswrapper[27835]: E0318 13:46:30.009281 27835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 18 13:46:30.009456 master-0 kubenswrapper[27835]: E0318 13:46:30.009373 27835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerName="nova-scheduler-scheduler"
Mar 18 13:46:30.504435 master-0 kubenswrapper[27835]: I0318 13:46:30.503952 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:30.595695 master-0 kubenswrapper[27835]: I0318 13:46:30.595551 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:30.609536 master-0 kubenswrapper[27835]: I0318 13:46:30.609298 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:30.621381 master-0 kubenswrapper[27835]: I0318 13:46:30.621314 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:30.621929 master-0 kubenswrapper[27835]: E0318 13:46:30.621903 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ac354c-0c65-4201-987a-da6a75a7a63c" containerName="nova-manage"
Mar 18 13:46:30.621929 master-0 kubenswrapper[27835]: I0318 13:46:30.621927 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ac354c-0c65-4201-987a-da6a75a7a63c" containerName="nova-manage"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: E0318 13:46:30.621938 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="init"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: I0318 13:46:30.621944 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="init"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: E0318 13:46:30.621985 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" containerName="nova-manage"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: I0318 13:46:30.621993 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" containerName="nova-manage"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: E0318 13:46:30.622008 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="dnsmasq-dns"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: I0318 13:46:30.622013 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="dnsmasq-dns"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: E0318 13:46:30.622045 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-log"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: I0318 13:46:30.622051 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-log"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: E0318 13:46:30.622062 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-api"
Mar 18 13:46:30.622076 master-0 kubenswrapper[27835]: I0318 13:46:30.622070 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-api"
Mar 18 13:46:30.622676 master-0 kubenswrapper[27835]: I0318 13:46:30.622259 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-log"
Mar 18 13:46:30.622676 master-0 kubenswrapper[27835]: I0318 13:46:30.622506 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b59d4086-4f07-49d4-bbc4-6fbb69f545c7" containerName="dnsmasq-dns"
Mar 18 13:46:30.622676 master-0 kubenswrapper[27835]: I0318 13:46:30.622538 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0" containerName="nova-manage"
Mar 18 13:46:30.622676 master-0 kubenswrapper[27835]: I0318 13:46:30.622550 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ac354c-0c65-4201-987a-da6a75a7a63c" containerName="nova-manage"
Mar 18 13:46:30.622676 master-0 kubenswrapper[27835]: I0318 13:46:30.622564 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" containerName="nova-api-api"
Mar 18 13:46:30.623944 master-0 kubenswrapper[27835]: I0318 13:46:30.623908 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:30.630200 master-0 kubenswrapper[27835]: I0318 13:46:30.630148 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 18 13:46:30.631145 master-0 kubenswrapper[27835]: I0318 13:46:30.630433 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 13:46:30.631145 master-0 kubenswrapper[27835]: I0318 13:46:30.630468 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 18 13:46:30.645125 master-0 kubenswrapper[27835]: I0318 13:46:30.645062 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:30.767029 master-0 kubenswrapper[27835]: I0318 13:46:30.766964 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.767240 master-0 kubenswrapper[27835]: I0318 13:46:30.767043 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-logs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.767275 master-0 kubenswrapper[27835]: I0318 13:46:30.767250 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-config-data\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.767485 master-0 kubenswrapper[27835]: I0318 13:46:30.767456 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.767538 master-0 kubenswrapper[27835]: I0318 13:46:30.767504 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6x8g\" (UniqueName: \"kubernetes.io/projected/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-kube-api-access-h6x8g\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.767578 master-0 kubenswrapper[27835]: I0318 13:46:30.767544 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.870346 master-0 kubenswrapper[27835]: I0318 13:46:30.870221 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.870346 master-0 kubenswrapper[27835]: I0318 13:46:30.870316 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-logs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.870613 master-0 kubenswrapper[27835]: I0318 13:46:30.870398 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-config-data\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.870613 master-0 kubenswrapper[27835]: I0318 13:46:30.870495 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.870613 master-0 kubenswrapper[27835]: I0318 13:46:30.870532 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6x8g\" (UniqueName: \"kubernetes.io/projected/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-kube-api-access-h6x8g\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.871238 master-0 kubenswrapper[27835]: I0318 13:46:30.871171 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.871311 master-0 kubenswrapper[27835]: I0318 13:46:30.871274 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-logs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.877203 master-0 kubenswrapper[27835]: I0318 13:46:30.877157 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.877326 master-0 kubenswrapper[27835]: I0318 13:46:30.877228 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.877810 master-0 kubenswrapper[27835]: I0318 13:46:30.877783 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-config-data\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.880893 master-0 kubenswrapper[27835]: I0318 13:46:30.880863 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.896330 master-0 kubenswrapper[27835]: I0318 13:46:30.896273 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6x8g\" (UniqueName: \"kubernetes.io/projected/8fb70fd2-80c4-4ae6-8568-67bb171eb5cd-kube-api-access-h6x8g\") pod \"nova-api-0\" (UID: \"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd\") " pod="openstack/nova-api-0"
Mar 18 13:46:30.956502 master-0 kubenswrapper[27835]: I0318 13:46:30.956440 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 13:46:31.405790 master-0 kubenswrapper[27835]: I0318 13:46:31.405736 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 13:46:31.409265 master-0 kubenswrapper[27835]: W0318 13:46:31.409218 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb70fd2_80c4_4ae6_8568_67bb171eb5cd.slice/crio-9226c76d35c22510a55a87fc9ce60e2f280e3e38c70de9565014df7cd3fed494 WatchSource:0}: Error finding container 9226c76d35c22510a55a87fc9ce60e2f280e3e38c70de9565014df7cd3fed494: Status 404 returned error can't find the container with id 9226c76d35c22510a55a87fc9ce60e2f280e3e38c70de9565014df7cd3fed494
Mar 18 13:46:31.521235 master-0 kubenswrapper[27835]: I0318 13:46:31.520939 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd","Type":"ContainerStarted","Data":"9226c76d35c22510a55a87fc9ce60e2f280e3e38c70de9565014df7cd3fed494"}
Mar 18 13:46:32.306617 master-0 kubenswrapper[27835]: I0318 13:46:32.305762 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4" path="/var/lib/kubelet/pods/3e8a1974-f5b4-4dc5-b6b3-4a97d19da1c4/volumes"
Mar 18 13:46:32.538468 master-0 kubenswrapper[27835]: I0318 13:46:32.538180 27835 generic.go:334] "Generic (PLEG): container finished" podID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerID="f3610f6494fb27e9563ad2cb50befe234c773a41f0fe21f55ca6a508beb55696" exitCode=0
Mar 18 13:46:32.538468 master-0 kubenswrapper[27835]: I0318 13:46:32.538273 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerDied","Data":"f3610f6494fb27e9563ad2cb50befe234c773a41f0fe21f55ca6a508beb55696"}
Mar 18 13:46:32.538468 master-0 kubenswrapper[27835]: I0318 13:46:32.538336 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332","Type":"ContainerDied","Data":"96b8d3ae9e3ada1be16aac169cc30b24c19737cf3979d6559118f6d4099f4a38"}
Mar 18 13:46:32.538468 master-0 kubenswrapper[27835]: I0318 13:46:32.538351 27835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b8d3ae9e3ada1be16aac169cc30b24c19737cf3979d6559118f6d4099f4a38"
Mar 18 13:46:32.542462 master-0 kubenswrapper[27835]: I0318 13:46:32.542066 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd","Type":"ContainerStarted","Data":"d0cb92b99946d93ee8437f95d30c36a80710e89f27220023e9589a0bf51447f2"}
Mar 18 13:46:32.542462 master-0 kubenswrapper[27835]: I0318 13:46:32.542105 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fb70fd2-80c4-4ae6-8568-67bb171eb5cd","Type":"ContainerStarted","Data":"9490f6eafe7610e093cf148651164b691b8434a25bba89e916a3385a2d4f82de"}
Mar 18 13:46:32.555023 master-0 kubenswrapper[27835]: I0318 13:46:32.553645 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 13:46:32.577518 master-0 kubenswrapper[27835]: I0318 13:46:32.575178 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.575155235 podStartE2EDuration="2.575155235s" podCreationTimestamp="2026-03-18 13:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:32.56853227 +0000 UTC m=+1356.533743860" watchObservedRunningTime="2026-03-18 13:46:32.575155235 +0000 UTC m=+1356.540366795"
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630111 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg\") pod \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") "
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630226 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs\") pod \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") "
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630266 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle\") pod \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") "
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630400 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs\") pod \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") "
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630464 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data\") pod \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\" (UID: \"79a347a5-ca15-4fa5-be5f-bb4d7e2c2332\") "
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.630978 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs" (OuterVolumeSpecName: "logs") pod "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" (UID: "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 13:46:32.632461 master-0 kubenswrapper[27835]: I0318 13:46:32.631545 27835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 13:46:32.641189 master-0 kubenswrapper[27835]: I0318 13:46:32.641099 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg" (OuterVolumeSpecName: "kube-api-access-n7hlg") pod "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" (UID: "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332"). InnerVolumeSpecName "kube-api-access-n7hlg".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:46:32.682634 master-0 kubenswrapper[27835]: I0318 13:46:32.678545 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" (UID: "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:32.699855 master-0 kubenswrapper[27835]: I0318 13:46:32.699785 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data" (OuterVolumeSpecName: "config-data") pod "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" (UID: "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:32.713121 master-0 kubenswrapper[27835]: I0318 13:46:32.713042 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" (UID: "79a347a5-ca15-4fa5-be5f-bb4d7e2c2332"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:32.735746 master-0 kubenswrapper[27835]: I0318 13:46:32.733944 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-kube-api-access-n7hlg\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:32.735746 master-0 kubenswrapper[27835]: I0318 13:46:32.734013 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:32.735746 master-0 kubenswrapper[27835]: I0318 13:46:32.734028 27835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:32.735746 master-0 kubenswrapper[27835]: I0318 13:46:32.734042 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:33.562528 master-0 kubenswrapper[27835]: I0318 13:46:33.562406 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:46:33.678472 master-0 kubenswrapper[27835]: I0318 13:46:33.678371 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:46:33.699651 master-0 kubenswrapper[27835]: I0318 13:46:33.699178 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:46:33.729079 master-0 kubenswrapper[27835]: I0318 13:46:33.726928 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:46:33.740887 master-0 kubenswrapper[27835]: E0318 13:46:33.740829 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-log" Mar 18 13:46:33.741158 master-0 kubenswrapper[27835]: I0318 13:46:33.741142 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-log" Mar 18 13:46:33.741292 master-0 kubenswrapper[27835]: E0318 13:46:33.741274 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-metadata" Mar 18 13:46:33.741386 master-0 kubenswrapper[27835]: I0318 13:46:33.741373 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-metadata" Mar 18 13:46:33.741918 master-0 kubenswrapper[27835]: I0318 13:46:33.741896 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-metadata" Mar 18 13:46:33.742038 master-0 kubenswrapper[27835]: I0318 13:46:33.742024 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" containerName="nova-metadata-log" Mar 18 13:46:33.743797 master-0 kubenswrapper[27835]: I0318 13:46:33.743770 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:46:33.752038 master-0 kubenswrapper[27835]: I0318 13:46:33.751989 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 13:46:33.752582 master-0 kubenswrapper[27835]: I0318 13:46:33.752498 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 18 13:46:33.757447 master-0 kubenswrapper[27835]: I0318 13:46:33.757372 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:46:33.886522 master-0 kubenswrapper[27835]: I0318 13:46:33.884341 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-config-data\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.886522 master-0 kubenswrapper[27835]: I0318 13:46:33.884450 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6l2\" (UniqueName: \"kubernetes.io/projected/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-kube-api-access-bh6l2\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.886522 master-0 kubenswrapper[27835]: I0318 13:46:33.884566 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-logs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.886522 master-0 kubenswrapper[27835]: I0318 13:46:33.885208 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.886522 master-0 kubenswrapper[27835]: I0318 13:46:33.885288 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.987634 master-0 kubenswrapper[27835]: I0318 13:46:33.987559 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-logs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.987862 master-0 kubenswrapper[27835]: I0318 13:46:33.987746 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.987862 master-0 kubenswrapper[27835]: I0318 13:46:33.987783 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.987972 master-0 kubenswrapper[27835]: I0318 13:46:33.987865 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-config-data\") pod 
\"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.987972 master-0 kubenswrapper[27835]: I0318 13:46:33.987919 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh6l2\" (UniqueName: \"kubernetes.io/projected/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-kube-api-access-bh6l2\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.988089 master-0 kubenswrapper[27835]: I0318 13:46:33.988051 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-logs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:33.991771 master-0 kubenswrapper[27835]: I0318 13:46:33.991424 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-config-data\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:34.001477 master-0 kubenswrapper[27835]: I0318 13:46:34.000006 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:34.003582 master-0 kubenswrapper[27835]: I0318 13:46:34.003544 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:34.005583 master-0 
kubenswrapper[27835]: I0318 13:46:34.005538 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh6l2\" (UniqueName: \"kubernetes.io/projected/d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239-kube-api-access-bh6l2\") pod \"nova-metadata-0\" (UID: \"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239\") " pod="openstack/nova-metadata-0" Mar 18 13:46:34.068328 master-0 kubenswrapper[27835]: I0318 13:46:34.068280 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 13:46:34.217451 master-0 kubenswrapper[27835]: I0318 13:46:34.217395 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:46:34.302647 master-0 kubenswrapper[27835]: I0318 13:46:34.302342 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a347a5-ca15-4fa5-be5f-bb4d7e2c2332" path="/var/lib/kubelet/pods/79a347a5-ca15-4fa5-be5f-bb4d7e2c2332/volumes" Mar 18 13:46:34.397432 master-0 kubenswrapper[27835]: I0318 13:46:34.397358 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle\") pod \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " Mar 18 13:46:34.398586 master-0 kubenswrapper[27835]: I0318 13:46:34.398548 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzlbz\" (UniqueName: \"kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz\") pod \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " Mar 18 13:46:34.398654 master-0 kubenswrapper[27835]: I0318 13:46:34.398639 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data\") pod \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\" (UID: \"374b1e4c-f35c-4d46-9ea0-e65d45b825d8\") " Mar 18 13:46:34.402950 master-0 kubenswrapper[27835]: I0318 13:46:34.402887 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz" (OuterVolumeSpecName: "kube-api-access-fzlbz") pod "374b1e4c-f35c-4d46-9ea0-e65d45b825d8" (UID: "374b1e4c-f35c-4d46-9ea0-e65d45b825d8"). InnerVolumeSpecName "kube-api-access-fzlbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:46:34.436134 master-0 kubenswrapper[27835]: I0318 13:46:34.436073 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "374b1e4c-f35c-4d46-9ea0-e65d45b825d8" (UID: "374b1e4c-f35c-4d46-9ea0-e65d45b825d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:34.439724 master-0 kubenswrapper[27835]: I0318 13:46:34.439630 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data" (OuterVolumeSpecName: "config-data") pod "374b1e4c-f35c-4d46-9ea0-e65d45b825d8" (UID: "374b1e4c-f35c-4d46-9ea0-e65d45b825d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:46:34.503447 master-0 kubenswrapper[27835]: I0318 13:46:34.503200 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzlbz\" (UniqueName: \"kubernetes.io/projected/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-kube-api-access-fzlbz\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:34.503447 master-0 kubenswrapper[27835]: I0318 13:46:34.503256 27835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:34.503447 master-0 kubenswrapper[27835]: I0318 13:46:34.503271 27835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b1e4c-f35c-4d46-9ea0-e65d45b825d8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 13:46:34.564563 master-0 kubenswrapper[27835]: I0318 13:46:34.564510 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 13:46:34.580468 master-0 kubenswrapper[27835]: I0318 13:46:34.580369 27835 generic.go:334] "Generic (PLEG): container finished" podID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" exitCode=0 Mar 18 13:46:34.580468 master-0 kubenswrapper[27835]: I0318 13:46:34.580458 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"374b1e4c-f35c-4d46-9ea0-e65d45b825d8","Type":"ContainerDied","Data":"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938"} Mar 18 13:46:34.580758 master-0 kubenswrapper[27835]: I0318 13:46:34.580488 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"374b1e4c-f35c-4d46-9ea0-e65d45b825d8","Type":"ContainerDied","Data":"84d6d8e8e4576ec25ac8bb82270677e1e515fff8b6abc9022baf622b858bad09"} Mar 18 
13:46:34.580758 master-0 kubenswrapper[27835]: I0318 13:46:34.580508 27835 scope.go:117] "RemoveContainer" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" Mar 18 13:46:34.580758 master-0 kubenswrapper[27835]: I0318 13:46:34.580635 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:46:34.582015 master-0 kubenswrapper[27835]: I0318 13:46:34.581979 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239","Type":"ContainerStarted","Data":"0acc10a11fa061550980b8d66b54ef10e9c675ea17aa0417c1ad794dc5029c61"} Mar 18 13:46:34.606660 master-0 kubenswrapper[27835]: I0318 13:46:34.606611 27835 scope.go:117] "RemoveContainer" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" Mar 18 13:46:34.607345 master-0 kubenswrapper[27835]: E0318 13:46:34.607313 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938\": container with ID starting with 82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938 not found: ID does not exist" containerID="82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938" Mar 18 13:46:34.607443 master-0 kubenswrapper[27835]: I0318 13:46:34.607354 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938"} err="failed to get container status \"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938\": rpc error: code = NotFound desc = could not find container \"82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938\": container with ID starting with 82b65ca3527690011b6a8817ba9fea65862d72f03e4b5dd3ad91a708d1f08938 not found: ID does not exist" Mar 18 13:46:34.629929 
master-0 kubenswrapper[27835]: I0318 13:46:34.629879 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:46:34.658547 master-0 kubenswrapper[27835]: I0318 13:46:34.658479 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:46:34.709901 master-0 kubenswrapper[27835]: I0318 13:46:34.709849 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:46:34.710453 master-0 kubenswrapper[27835]: E0318 13:46:34.710421 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerName="nova-scheduler-scheduler" Mar 18 13:46:34.710453 master-0 kubenswrapper[27835]: I0318 13:46:34.710449 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerName="nova-scheduler-scheduler" Mar 18 13:46:34.710771 master-0 kubenswrapper[27835]: I0318 13:46:34.710752 27835 memory_manager.go:354] "RemoveStaleState removing state" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" containerName="nova-scheduler-scheduler" Mar 18 13:46:34.711750 master-0 kubenswrapper[27835]: I0318 13:46:34.711720 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:46:34.713723 master-0 kubenswrapper[27835]: I0318 13:46:34.713689 27835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 18 13:46:34.729080 master-0 kubenswrapper[27835]: I0318 13:46:34.729027 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:46:34.809454 master-0 kubenswrapper[27835]: I0318 13:46:34.809385 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.809547 master-0 kubenswrapper[27835]: I0318 13:46:34.809527 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l44bs\" (UniqueName: \"kubernetes.io/projected/2965ba5c-2c16-4038-9a9a-2b7720a286f7-kube-api-access-l44bs\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.809639 master-0 kubenswrapper[27835]: I0318 13:46:34.809616 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-config-data\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.911930 master-0 kubenswrapper[27835]: I0318 13:46:34.911853 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-config-data\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 
13:46:34.912080 master-0 kubenswrapper[27835]: I0318 13:46:34.912020 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.912150 master-0 kubenswrapper[27835]: I0318 13:46:34.912132 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l44bs\" (UniqueName: \"kubernetes.io/projected/2965ba5c-2c16-4038-9a9a-2b7720a286f7-kube-api-access-l44bs\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.916226 master-0 kubenswrapper[27835]: I0318 13:46:34.916172 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-config-data\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.917622 master-0 kubenswrapper[27835]: I0318 13:46:34.917588 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2965ba5c-2c16-4038-9a9a-2b7720a286f7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:34.933928 master-0 kubenswrapper[27835]: I0318 13:46:34.933886 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l44bs\" (UniqueName: \"kubernetes.io/projected/2965ba5c-2c16-4038-9a9a-2b7720a286f7-kube-api-access-l44bs\") pod \"nova-scheduler-0\" (UID: \"2965ba5c-2c16-4038-9a9a-2b7720a286f7\") " pod="openstack/nova-scheduler-0" Mar 18 13:46:35.069217 master-0 kubenswrapper[27835]: I0318 13:46:35.069168 27835 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 13:46:35.607692 master-0 kubenswrapper[27835]: I0318 13:46:35.607609 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 13:46:35.609251 master-0 kubenswrapper[27835]: I0318 13:46:35.609176 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239","Type":"ContainerStarted","Data":"fec9bfce58162ee6d27a50920d78b19d59711e1481313237266e1b55ff27a582"} Mar 18 13:46:35.609251 master-0 kubenswrapper[27835]: I0318 13:46:35.609251 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239","Type":"ContainerStarted","Data":"ea82cb046ed60391802fbd665bcdc75f1e3d40bbfe80390943240c688c1df9c5"} Mar 18 13:46:35.615533 master-0 kubenswrapper[27835]: W0318 13:46:35.613205 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2965ba5c_2c16_4038_9a9a_2b7720a286f7.slice/crio-c0e11c2f0f68dc0e4c9f7996238cfca636cf5d4e7202d1203f9fa03fce9bfa87 WatchSource:0}: Error finding container c0e11c2f0f68dc0e4c9f7996238cfca636cf5d4e7202d1203f9fa03fce9bfa87: Status 404 returned error can't find the container with id c0e11c2f0f68dc0e4c9f7996238cfca636cf5d4e7202d1203f9fa03fce9bfa87 Mar 18 13:46:35.647453 master-0 kubenswrapper[27835]: I0318 13:46:35.647341 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.647321599 podStartE2EDuration="2.647321599s" podCreationTimestamp="2026-03-18 13:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:35.629927828 +0000 UTC m=+1359.595139428" watchObservedRunningTime="2026-03-18 13:46:35.647321599 +0000 UTC 
m=+1359.612533159" Mar 18 13:46:36.323551 master-0 kubenswrapper[27835]: I0318 13:46:36.323455 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374b1e4c-f35c-4d46-9ea0-e65d45b825d8" path="/var/lib/kubelet/pods/374b1e4c-f35c-4d46-9ea0-e65d45b825d8/volumes" Mar 18 13:46:36.629860 master-0 kubenswrapper[27835]: I0318 13:46:36.629701 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2965ba5c-2c16-4038-9a9a-2b7720a286f7","Type":"ContainerStarted","Data":"f2ba7e32b73c55ce7df015d6616914158faf81d8681b6c9d447255d57b21e25e"} Mar 18 13:46:36.629860 master-0 kubenswrapper[27835]: I0318 13:46:36.629745 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2965ba5c-2c16-4038-9a9a-2b7720a286f7","Type":"ContainerStarted","Data":"c0e11c2f0f68dc0e4c9f7996238cfca636cf5d4e7202d1203f9fa03fce9bfa87"} Mar 18 13:46:36.656523 master-0 kubenswrapper[27835]: I0318 13:46:36.656402 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6563829119999998 podStartE2EDuration="2.656382912s" podCreationTimestamp="2026-03-18 13:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:46:36.650986259 +0000 UTC m=+1360.616197839" watchObservedRunningTime="2026-03-18 13:46:36.656382912 +0000 UTC m=+1360.621594472" Mar 18 13:46:40.070292 master-0 kubenswrapper[27835]: I0318 13:46:40.069553 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 18 13:46:40.957284 master-0 kubenswrapper[27835]: I0318 13:46:40.957198 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 13:46:40.957899 master-0 kubenswrapper[27835]: I0318 13:46:40.957877 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Mar 18 13:46:41.969223 master-0 kubenswrapper[27835]: I0318 13:46:41.968936 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fb70fd2-80c4-4ae6-8568-67bb171eb5cd" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.12:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:46:41.969223 master-0 kubenswrapper[27835]: I0318 13:46:41.969035 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fb70fd2-80c4-4ae6-8568-67bb171eb5cd" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.12:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:46:44.069846 master-0 kubenswrapper[27835]: I0318 13:46:44.069775 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 13:46:44.069846 master-0 kubenswrapper[27835]: I0318 13:46:44.069858 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 13:46:45.070432 master-0 kubenswrapper[27835]: I0318 13:46:45.070346 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 13:46:45.081438 master-0 kubenswrapper[27835]: I0318 13:46:45.080872 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:46:45.081438 master-0 kubenswrapper[27835]: I0318 13:46:45.080913 27835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 13:46:45.107440 master-0 kubenswrapper[27835]: I0318 13:46:45.107156 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 18 13:46:45.777303 master-0 kubenswrapper[27835]: I0318 13:46:45.777242 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 13:46:48.957558 master-0 kubenswrapper[27835]: I0318 13:46:48.957486 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:46:48.958291 master-0 kubenswrapper[27835]: I0318 13:46:48.957905 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 13:46:50.970598 master-0 kubenswrapper[27835]: I0318 13:46:50.970529 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 13:46:50.980300 master-0 kubenswrapper[27835]: I0318 13:46:50.980218 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 13:46:50.989025 master-0 kubenswrapper[27835]: I0318 13:46:50.988952 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 13:46:51.867560 master-0 kubenswrapper[27835]: I0318 13:46:51.867487 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 13:46:52.072074 master-0 kubenswrapper[27835]: I0318 13:46:52.071671 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 13:46:52.072074 master-0 kubenswrapper[27835]: I0318 13:46:52.071750 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 13:46:54.078468 master-0 kubenswrapper[27835]: I0318 13:46:54.076469 
27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 13:46:54.079878 master-0 kubenswrapper[27835]: I0318 13:46:54.079843 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 13:46:54.083064 master-0 kubenswrapper[27835]: I0318 13:46:54.083007 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 13:46:54.903342 master-0 kubenswrapper[27835]: I0318 13:46:54.903280 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 13:47:22.515519 master-0 kubenswrapper[27835]: I0318 13:47:22.514263 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"] Mar 18 13:47:22.515519 master-0 kubenswrapper[27835]: I0318 13:47:22.514557 27835 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" podUID="355da07d-e01b-4940-a772-686d744c936c" containerName="sushy-emulator" containerID="cri-o://e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872" gracePeriod=30 Mar 18 13:47:23.326050 master-0 kubenswrapper[27835]: I0318 13:47:23.325997 27835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:47:23.489663 master-0 kubenswrapper[27835]: I0318 13:47:23.489568 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config\") pod \"355da07d-e01b-4940-a772-686d744c936c\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " Mar 18 13:47:23.489771 master-0 kubenswrapper[27835]: I0318 13:47:23.489708 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc4gh\" (UniqueName: \"kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh\") pod \"355da07d-e01b-4940-a772-686d744c936c\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " Mar 18 13:47:23.492813 master-0 kubenswrapper[27835]: I0318 13:47:23.489855 27835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config\") pod \"355da07d-e01b-4940-a772-686d744c936c\" (UID: \"355da07d-e01b-4940-a772-686d744c936c\") " Mar 18 13:47:23.499355 master-0 kubenswrapper[27835]: I0318 13:47:23.499295 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "355da07d-e01b-4940-a772-686d744c936c" (UID: "355da07d-e01b-4940-a772-686d744c936c"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 13:47:23.507048 master-0 kubenswrapper[27835]: I0318 13:47:23.506947 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh" (OuterVolumeSpecName: "kube-api-access-qc4gh") pod "355da07d-e01b-4940-a772-686d744c936c" (UID: "355da07d-e01b-4940-a772-686d744c936c"). InnerVolumeSpecName "kube-api-access-qc4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 13:47:23.526813 master-0 kubenswrapper[27835]: I0318 13:47:23.517815 27835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "355da07d-e01b-4940-a772-686d744c936c" (UID: "355da07d-e01b-4940-a772-686d744c936c"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 13:47:23.585645 master-0 kubenswrapper[27835]: I0318 13:47:23.585077 27835 generic.go:334] "Generic (PLEG): container finished" podID="355da07d-e01b-4940-a772-686d744c936c" containerID="e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872" exitCode=0 Mar 18 13:47:23.585645 master-0 kubenswrapper[27835]: I0318 13:47:23.585189 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" event={"ID":"355da07d-e01b-4940-a772-686d744c936c","Type":"ContainerDied","Data":"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872"} Mar 18 13:47:23.585645 master-0 kubenswrapper[27835]: I0318 13:47:23.585251 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" event={"ID":"355da07d-e01b-4940-a772-686d744c936c","Type":"ContainerDied","Data":"58336f73f174603f1b575703caed84c86d87898edd00bc63087744de4671d9f2"} Mar 18 13:47:23.585645 master-0 kubenswrapper[27835]: I0318 
13:47:23.585281 27835 scope.go:117] "RemoveContainer" containerID="e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872" Mar 18 13:47:23.585988 master-0 kubenswrapper[27835]: I0318 13:47:23.585674 27835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-p67lz" Mar 18 13:47:23.596639 master-0 kubenswrapper[27835]: I0318 13:47:23.596567 27835 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/355da07d-e01b-4940-a772-686d744c936c-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:47:23.596639 master-0 kubenswrapper[27835]: I0318 13:47:23.596637 27835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc4gh\" (UniqueName: \"kubernetes.io/projected/355da07d-e01b-4940-a772-686d744c936c-kube-api-access-qc4gh\") on node \"master-0\" DevicePath \"\"" Mar 18 13:47:23.596871 master-0 kubenswrapper[27835]: I0318 13:47:23.596660 27835 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/355da07d-e01b-4940-a772-686d744c936c-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 18 13:47:23.657849 master-0 kubenswrapper[27835]: I0318 13:47:23.656137 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-nz42h"] Mar 18 13:47:23.657849 master-0 kubenswrapper[27835]: E0318 13:47:23.657107 27835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355da07d-e01b-4940-a772-686d744c936c" containerName="sushy-emulator" Mar 18 13:47:23.657849 master-0 kubenswrapper[27835]: I0318 13:47:23.657129 27835 state_mem.go:107] "Deleted CPUSet assignment" podUID="355da07d-e01b-4940-a772-686d744c936c" containerName="sushy-emulator" Mar 18 13:47:23.657849 master-0 kubenswrapper[27835]: I0318 13:47:23.657740 27835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="355da07d-e01b-4940-a772-686d744c936c" containerName="sushy-emulator" Mar 18 13:47:23.659229 master-0 kubenswrapper[27835]: I0318 13:47:23.659193 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.661787 master-0 kubenswrapper[27835]: I0318 13:47:23.661747 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 18 13:47:23.670314 master-0 kubenswrapper[27835]: I0318 13:47:23.670206 27835 scope.go:117] "RemoveContainer" containerID="e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872" Mar 18 13:47:23.671065 master-0 kubenswrapper[27835]: E0318 13:47:23.671000 27835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872\": container with ID starting with e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872 not found: ID does not exist" containerID="e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872" Mar 18 13:47:23.671128 master-0 kubenswrapper[27835]: I0318 13:47:23.671076 27835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872"} err="failed to get container status \"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872\": rpc error: code = NotFound desc = could not find container \"e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872\": container with ID starting with e14a2769aa027884fe2cb7c11df376ef1c59d384146a52f5efa4c2e414161872 not found: ID does not exist" Mar 18 13:47:23.672209 master-0 kubenswrapper[27835]: I0318 13:47:23.672161 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-nz42h"] Mar 18 13:47:23.690501 master-0 kubenswrapper[27835]: I0318 
13:47:23.690348 27835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"] Mar 18 13:47:23.710378 master-0 kubenswrapper[27835]: I0318 13:47:23.710289 27835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-p67lz"] Mar 18 13:47:23.807007 master-0 kubenswrapper[27835]: I0318 13:47:23.806837 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e0699c4f-54b8-4f21-aeb0-c91f7d923017-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.807007 master-0 kubenswrapper[27835]: I0318 13:47:23.806988 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2zg\" (UniqueName: \"kubernetes.io/projected/e0699c4f-54b8-4f21-aeb0-c91f7d923017-kube-api-access-qx2zg\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.807237 master-0 kubenswrapper[27835]: I0318 13:47:23.807052 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e0699c4f-54b8-4f21-aeb0-c91f7d923017-os-client-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.910381 master-0 kubenswrapper[27835]: I0318 13:47:23.910274 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e0699c4f-54b8-4f21-aeb0-c91f7d923017-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: 
\"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.910647 master-0 kubenswrapper[27835]: I0318 13:47:23.910476 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2zg\" (UniqueName: \"kubernetes.io/projected/e0699c4f-54b8-4f21-aeb0-c91f7d923017-kube-api-access-qx2zg\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.910647 master-0 kubenswrapper[27835]: I0318 13:47:23.910539 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e0699c4f-54b8-4f21-aeb0-c91f7d923017-os-client-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.911636 master-0 kubenswrapper[27835]: I0318 13:47:23.911578 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/e0699c4f-54b8-4f21-aeb0-c91f7d923017-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.918458 master-0 kubenswrapper[27835]: I0318 13:47:23.918373 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/e0699c4f-54b8-4f21-aeb0-c91f7d923017-os-client-config\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.939619 master-0 kubenswrapper[27835]: I0318 13:47:23.939550 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2zg\" (UniqueName: 
\"kubernetes.io/projected/e0699c4f-54b8-4f21-aeb0-c91f7d923017-kube-api-access-qx2zg\") pod \"sushy-emulator-54b65fbdd6-nz42h\" (UID: \"e0699c4f-54b8-4f21-aeb0-c91f7d923017\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:23.996783 master-0 kubenswrapper[27835]: I0318 13:47:23.996695 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:24.298028 master-0 kubenswrapper[27835]: I0318 13:47:24.297770 27835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355da07d-e01b-4940-a772-686d744c936c" path="/var/lib/kubelet/pods/355da07d-e01b-4940-a772-686d744c936c/volumes" Mar 18 13:47:24.829849 master-0 kubenswrapper[27835]: I0318 13:47:24.829271 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-nz42h"] Mar 18 13:47:25.629552 master-0 kubenswrapper[27835]: I0318 13:47:25.629312 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" event={"ID":"e0699c4f-54b8-4f21-aeb0-c91f7d923017","Type":"ContainerStarted","Data":"da4d716890429a8256ac67ea241384f47575756fac1b17a6e8189c8a42659114"} Mar 18 13:47:25.629552 master-0 kubenswrapper[27835]: I0318 13:47:25.629380 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" event={"ID":"e0699c4f-54b8-4f21-aeb0-c91f7d923017","Type":"ContainerStarted","Data":"200f1626b6fb1be4a3075b1edd5dddc5cbfe333cd864f4e1c87c4f4b21eeb4fd"} Mar 18 13:47:33.997200 master-0 kubenswrapper[27835]: I0318 13:47:33.997134 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:33.998386 master-0 kubenswrapper[27835]: I0318 13:47:33.997747 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:34.010280 master-0 
kubenswrapper[27835]: I0318 13:47:34.010223 27835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:47:34.041329 master-0 kubenswrapper[27835]: I0318 13:47:34.041227 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" podStartSLOduration=11.041208049 podStartE2EDuration="11.041208049s" podCreationTimestamp="2026-03-18 13:47:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:47:25.65564224 +0000 UTC m=+1409.620853850" watchObservedRunningTime="2026-03-18 13:47:34.041208049 +0000 UTC m=+1418.006419609" Mar 18 13:47:34.754837 master-0 kubenswrapper[27835]: I0318 13:47:34.754767 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-54b65fbdd6-nz42h" Mar 18 13:49:02.825103 master-0 kubenswrapper[27835]: I0318 13:49:02.825000 27835 scope.go:117] "RemoveContainer" containerID="76ae9528f94638856ffd919244239fe4fc5ea2ae62abb4866666f27ab61d8b5d" Mar 18 13:49:02.853212 master-0 kubenswrapper[27835]: I0318 13:49:02.853081 27835 scope.go:117] "RemoveContainer" containerID="eef251ff3b79c10689d866f86932e87ec8fca874d418bf46a68b59aa4480fe70" Mar 18 13:49:02.892996 master-0 kubenswrapper[27835]: I0318 13:49:02.891638 27835 scope.go:117] "RemoveContainer" containerID="02e9d4627ab91f9885fa3697aa5e5860b1ec4409c6144140cf0e851184e8045e" Mar 18 13:50:02.981256 master-0 kubenswrapper[27835]: I0318 13:50:02.981162 27835 scope.go:117] "RemoveContainer" containerID="c351e66ead3b863e0bf41427b1781b2f159cb4bc7fb04945ff8e1b265cc8ad3a" Mar 18 13:50:03.009678 master-0 kubenswrapper[27835]: I0318 13:50:03.009571 27835 scope.go:117] "RemoveContainer" containerID="f89bf4204b535ab750e695241c60df308ecd5f2ab4f3683dc0bb785add0f33b4" Mar 18 13:50:03.041865 master-0 kubenswrapper[27835]: I0318 
13:50:03.041750 27835 scope.go:117] "RemoveContainer" containerID="7566039db55a3fa7a1c1e767db952b44b7bce0f0fe743c1d4c48cd7017b26a49" Mar 18 13:50:03.070573 master-0 kubenswrapper[27835]: I0318 13:50:03.070524 27835 scope.go:117] "RemoveContainer" containerID="265a0fc59c15ef6c7115cfbea99575530cc3273b796645623364156cc8e7e6bf" Mar 18 13:50:03.128728 master-0 kubenswrapper[27835]: I0318 13:50:03.128618 27835 scope.go:117] "RemoveContainer" containerID="b2864d0693a14f2ef2886a56951aea80a78bb695e86b9f014ae631a06b82319e" Mar 18 13:50:03.181845 master-0 kubenswrapper[27835]: I0318 13:50:03.181784 27835 scope.go:117] "RemoveContainer" containerID="d23f8cc53a3e4ddc3643e1801a64fc62b1e0b556bc933ce54e5a6a65b3886338" Mar 18 13:50:03.207155 master-0 kubenswrapper[27835]: I0318 13:50:03.207097 27835 scope.go:117] "RemoveContainer" containerID="43f8b3bdb46056778948638357ff249d94df1b2779c75ff764cc1ed3f2f99d3a" Mar 18 13:51:33.546160 master-0 kubenswrapper[27835]: I0318 13:51:33.546061 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b62cm/must-gather-n4hvs"] Mar 18 13:51:33.549775 master-0 kubenswrapper[27835]: I0318 13:51:33.549729 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.556446 master-0 kubenswrapper[27835]: I0318 13:51:33.556335 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b62cm"/"kube-root-ca.crt" Mar 18 13:51:33.563257 master-0 kubenswrapper[27835]: I0318 13:51:33.559901 27835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b62cm"/"openshift-service-ca.crt" Mar 18 13:51:33.563506 master-0 kubenswrapper[27835]: I0318 13:51:33.563315 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b62cm/must-gather-45vsl"] Mar 18 13:51:33.565833 master-0 kubenswrapper[27835]: I0318 13:51:33.565783 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.615277 master-0 kubenswrapper[27835]: I0318 13:51:33.615199 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/must-gather-n4hvs"] Mar 18 13:51:33.630437 master-0 kubenswrapper[27835]: I0318 13:51:33.630364 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20cc9b16-52d7-430e-b094-dd677e407a71-must-gather-output\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.635852 master-0 kubenswrapper[27835]: I0318 13:51:33.630723 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkwcn\" (UniqueName: \"kubernetes.io/projected/20cc9b16-52d7-430e-b094-dd677e407a71-kube-api-access-pkwcn\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.635852 master-0 kubenswrapper[27835]: I0318 13:51:33.631849 27835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdhn\" (UniqueName: \"kubernetes.io/projected/a268fe80-ede4-47c3-ad12-33e4cdaf744b-kube-api-access-8mdhn\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.635852 master-0 kubenswrapper[27835]: I0318 13:51:33.631940 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a268fe80-ede4-47c3-ad12-33e4cdaf744b-must-gather-output\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.635852 master-0 kubenswrapper[27835]: I0318 13:51:33.630996 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/must-gather-45vsl"] Mar 18 13:51:33.735676 master-0 kubenswrapper[27835]: I0318 13:51:33.735341 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20cc9b16-52d7-430e-b094-dd677e407a71-must-gather-output\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.735676 master-0 kubenswrapper[27835]: I0318 13:51:33.735480 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkwcn\" (UniqueName: \"kubernetes.io/projected/20cc9b16-52d7-430e-b094-dd677e407a71-kube-api-access-pkwcn\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.735676 master-0 kubenswrapper[27835]: I0318 13:51:33.735616 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mdhn\" (UniqueName: 
\"kubernetes.io/projected/a268fe80-ede4-47c3-ad12-33e4cdaf744b-kube-api-access-8mdhn\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.735676 master-0 kubenswrapper[27835]: I0318 13:51:33.735652 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a268fe80-ede4-47c3-ad12-33e4cdaf744b-must-gather-output\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.737326 master-0 kubenswrapper[27835]: I0318 13:51:33.736260 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a268fe80-ede4-47c3-ad12-33e4cdaf744b-must-gather-output\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.737326 master-0 kubenswrapper[27835]: I0318 13:51:33.736807 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20cc9b16-52d7-430e-b094-dd677e407a71-must-gather-output\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.772530 master-0 kubenswrapper[27835]: I0318 13:51:33.772473 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkwcn\" (UniqueName: \"kubernetes.io/projected/20cc9b16-52d7-430e-b094-dd677e407a71-kube-api-access-pkwcn\") pod \"must-gather-n4hvs\" (UID: \"20cc9b16-52d7-430e-b094-dd677e407a71\") " pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.773119 master-0 kubenswrapper[27835]: I0318 13:51:33.773073 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8mdhn\" (UniqueName: \"kubernetes.io/projected/a268fe80-ede4-47c3-ad12-33e4cdaf744b-kube-api-access-8mdhn\") pod \"must-gather-45vsl\" (UID: \"a268fe80-ede4-47c3-ad12-33e4cdaf744b\") " pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:33.889429 master-0 kubenswrapper[27835]: I0318 13:51:33.889286 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b62cm/must-gather-n4hvs" Mar 18 13:51:33.916783 master-0 kubenswrapper[27835]: I0318 13:51:33.916713 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b62cm/must-gather-45vsl" Mar 18 13:51:34.503631 master-0 kubenswrapper[27835]: I0318 13:51:34.503557 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/must-gather-n4hvs"] Mar 18 13:51:34.506820 master-0 kubenswrapper[27835]: W0318 13:51:34.506769 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20cc9b16_52d7_430e_b094_dd677e407a71.slice/crio-b0916939f0748d35a9c7feb99d931dcf564e133632c3082c5443913b753db146 WatchSource:0}: Error finding container b0916939f0748d35a9c7feb99d931dcf564e133632c3082c5443913b753db146: Status 404 returned error can't find the container with id b0916939f0748d35a9c7feb99d931dcf564e133632c3082c5443913b753db146 Mar 18 13:51:34.512754 master-0 kubenswrapper[27835]: I0318 13:51:34.512691 27835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 13:51:34.581492 master-0 kubenswrapper[27835]: W0318 13:51:34.570567 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda268fe80_ede4_47c3_ad12_33e4cdaf744b.slice/crio-4644dc51efe50564d3ca1abc18f63bc1202fd7f19f8e4b1682ac61c82e4f2080 WatchSource:0}: Error finding container 
4644dc51efe50564d3ca1abc18f63bc1202fd7f19f8e4b1682ac61c82e4f2080: Status 404 returned error can't find the container with id 4644dc51efe50564d3ca1abc18f63bc1202fd7f19f8e4b1682ac61c82e4f2080 Mar 18 13:51:34.581492 master-0 kubenswrapper[27835]: I0318 13:51:34.572883 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/must-gather-45vsl"] Mar 18 13:51:35.107908 master-0 kubenswrapper[27835]: I0318 13:51:35.107810 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-n4hvs" event={"ID":"20cc9b16-52d7-430e-b094-dd677e407a71","Type":"ContainerStarted","Data":"b0916939f0748d35a9c7feb99d931dcf564e133632c3082c5443913b753db146"} Mar 18 13:51:35.111473 master-0 kubenswrapper[27835]: I0318 13:51:35.110968 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-45vsl" event={"ID":"a268fe80-ede4-47c3-ad12-33e4cdaf744b","Type":"ContainerStarted","Data":"4644dc51efe50564d3ca1abc18f63bc1202fd7f19f8e4b1682ac61c82e4f2080"} Mar 18 13:51:37.144073 master-0 kubenswrapper[27835]: I0318 13:51:37.143389 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-45vsl" event={"ID":"a268fe80-ede4-47c3-ad12-33e4cdaf744b","Type":"ContainerStarted","Data":"087cafdca84e84a9a1c1f4bb4edf29814dcecde893b888abe1b956302db29096"} Mar 18 13:51:37.144073 master-0 kubenswrapper[27835]: I0318 13:51:37.143467 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-45vsl" event={"ID":"a268fe80-ede4-47c3-ad12-33e4cdaf744b","Type":"ContainerStarted","Data":"373b9724cc8c9fe3e08e7961bdfb76eda3f560a8499ed2b1bb77748c778a9cdb"} Mar 18 13:51:37.186445 master-0 kubenswrapper[27835]: I0318 13:51:37.186336 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b62cm/must-gather-45vsl" podStartSLOduration=2.76178714 podStartE2EDuration="4.186297605s" 
podCreationTimestamp="2026-03-18 13:51:33 +0000 UTC" firstStartedPulling="2026-03-18 13:51:34.574337493 +0000 UTC m=+1658.539549053" lastFinishedPulling="2026-03-18 13:51:35.998847958 +0000 UTC m=+1659.964059518" observedRunningTime="2026-03-18 13:51:37.166966127 +0000 UTC m=+1661.132177687" watchObservedRunningTime="2026-03-18 13:51:37.186297605 +0000 UTC m=+1661.151509175" Mar 18 13:51:39.421162 master-0 kubenswrapper[27835]: I0318 13:51:39.420908 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-bqmqw_8ffe2e75-9cc3-4244-95c8-800463c5aa28/cluster-version-operator/0.log" Mar 18 13:51:43.059814 master-0 kubenswrapper[27835]: E0318 13:51:43.059751 27835 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:53734->192.168.32.10:34267: write tcp 192.168.32.10:53734->192.168.32.10:34267: write: connection reset by peer Mar 18 13:51:43.404519 master-0 kubenswrapper[27835]: I0318 13:51:43.402915 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-jxd5c_4a9778bb-d878-4ae5-a7ab-004a1c9ee1e3/nmstate-console-plugin/0.log" Mar 18 13:51:43.740220 master-0 kubenswrapper[27835]: I0318 13:51:43.736928 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-k22dv_2954b55d-28c7-453e-b699-c73bb05e05b9/nmstate-handler/0.log" Mar 18 13:51:43.874800 master-0 kubenswrapper[27835]: I0318 13:51:43.868154 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-29vrd_4eadbe2a-43ed-4b28-871c-b8f7a3579fae/nmstate-metrics/0.log" Mar 18 13:51:43.880850 master-0 kubenswrapper[27835]: I0318 13:51:43.880791 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-29vrd_4eadbe2a-43ed-4b28-871c-b8f7a3579fae/kube-rbac-proxy/0.log" Mar 18 13:51:43.933853 master-0 
kubenswrapper[27835]: I0318 13:51:43.933797 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-xp5mm_62a087e6-1ed1-4cab-9c7e-f712ed437c35/nmstate-operator/0.log" Mar 18 13:51:43.961500 master-0 kubenswrapper[27835]: I0318 13:51:43.961461 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-mqsmx_bc036027-dfb3-45cb-aedb-017872d83490/nmstate-webhook/0.log" Mar 18 13:51:44.062474 master-0 kubenswrapper[27835]: I0318 13:51:44.061815 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/controller/0.log" Mar 18 13:51:44.077449 master-0 kubenswrapper[27835]: I0318 13:51:44.075276 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/kube-rbac-proxy/0.log" Mar 18 13:51:44.131452 master-0 kubenswrapper[27835]: I0318 13:51:44.131389 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/controller/0.log" Mar 18 13:51:45.174620 master-0 kubenswrapper[27835]: I0318 13:51:45.171880 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/frr/0.log" Mar 18 13:51:45.186440 master-0 kubenswrapper[27835]: I0318 13:51:45.185288 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/reloader/0.log" Mar 18 13:51:45.204438 master-0 kubenswrapper[27835]: I0318 13:51:45.201794 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/frr-metrics/0.log" Mar 18 13:51:45.217447 master-0 kubenswrapper[27835]: I0318 13:51:45.212307 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/kube-rbac-proxy/0.log" Mar 18 13:51:45.237434 master-0 kubenswrapper[27835]: I0318 13:51:45.235814 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/kube-rbac-proxy-frr/0.log" Mar 18 13:51:45.251438 master-0 kubenswrapper[27835]: I0318 13:51:45.249629 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-frr-files/0.log" Mar 18 13:51:45.276437 master-0 kubenswrapper[27835]: I0318 13:51:45.276217 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-reloader/0.log" Mar 18 13:51:45.286442 master-0 kubenswrapper[27835]: I0318 13:51:45.286061 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-metrics/0.log" Mar 18 13:51:45.322437 master-0 kubenswrapper[27835]: I0318 13:51:45.321982 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-49gvc_ea0bf2ff-c852-4365-9856-f3e4aa5b766d/frr-k8s-webhook-server/0.log" Mar 18 13:51:45.400805 master-0 kubenswrapper[27835]: I0318 13:51:45.400716 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-549849bb46-fnr78_ebfb29f2-806f-4507-8235-055d55cd360b/manager/0.log" Mar 18 13:51:45.427462 master-0 kubenswrapper[27835]: I0318 13:51:45.426317 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c4dc89f9c-lppql_01ddc4ef-d5f9-4761-b728-828f9a107b0c/webhook-server/0.log" Mar 18 13:51:45.785003 master-0 kubenswrapper[27835]: I0318 13:51:45.782965 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-jr6nk_675171a9-caa6-495c-930b-aee9a8d4cbeb/speaker/0.log" Mar 18 13:51:45.792927 master-0 kubenswrapper[27835]: I0318 13:51:45.788977 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jr6nk_675171a9-caa6-495c-930b-aee9a8d4cbeb/kube-rbac-proxy/0.log" Mar 18 13:51:46.308379 master-0 kubenswrapper[27835]: I0318 13:51:46.308313 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-n4hvs" event={"ID":"20cc9b16-52d7-430e-b094-dd677e407a71","Type":"ContainerStarted","Data":"075c920d172f68ee65ecf3ee9c8031f226a23d605177448ddb5b39b8c73312b7"} Mar 18 13:51:46.308379 master-0 kubenswrapper[27835]: I0318 13:51:46.308373 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/must-gather-n4hvs" event={"ID":"20cc9b16-52d7-430e-b094-dd677e407a71","Type":"ContainerStarted","Data":"844d0d5e76f27147065e2177656476a05938801c67411538e24ddac43d678510"} Mar 18 13:51:46.328196 master-0 kubenswrapper[27835]: I0318 13:51:46.328102 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b62cm/must-gather-n4hvs" podStartSLOduration=2.680049349 podStartE2EDuration="13.32808287s" podCreationTimestamp="2026-03-18 13:51:33 +0000 UTC" firstStartedPulling="2026-03-18 13:51:34.512561048 +0000 UTC m=+1658.477772608" lastFinishedPulling="2026-03-18 13:51:45.160594579 +0000 UTC m=+1669.125806129" observedRunningTime="2026-03-18 13:51:46.322682204 +0000 UTC m=+1670.287893774" watchObservedRunningTime="2026-03-18 13:51:46.32808287 +0000 UTC m=+1670.293294440" Mar 18 13:51:48.028326 master-0 kubenswrapper[27835]: I0318 13:51:48.028274 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 13:51:48.317789 master-0 kubenswrapper[27835]: I0318 13:51:48.315557 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 13:51:48.337523 master-0 kubenswrapper[27835]: I0318 13:51:48.336212 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 13:51:48.360097 master-0 kubenswrapper[27835]: I0318 13:51:48.356821 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 13:51:48.384113 master-0 kubenswrapper[27835]: I0318 13:51:48.384052 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 13:51:48.426818 master-0 kubenswrapper[27835]: I0318 13:51:48.426733 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 13:51:48.447505 master-0 kubenswrapper[27835]: I0318 13:51:48.447438 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 13:51:48.470055 master-0 kubenswrapper[27835]: I0318 13:51:48.469990 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 13:51:48.525482 master-0 kubenswrapper[27835]: I0318 13:51:48.524554 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_814ffa63-b08e-4de8-b912-8d7f0638230b/installer/0.log" Mar 18 13:51:48.566383 master-0 kubenswrapper[27835]: I0318 13:51:48.566321 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_5217b77d-b517-45c3-b76d-eee86d72b141/installer/0.log" Mar 18 13:51:48.656068 master-0 kubenswrapper[27835]: I0318 13:51:48.656022 27835 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6b4867d948-qsvkm_d5b46736-78ed-49f2-88ea-b5f864675d0f/oauth-openshift/0.log" Mar 18 13:51:50.144227 master-0 kubenswrapper[27835]: I0318 13:51:50.144111 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-sp4ld_bf9d21f9-64d6-4e21-a985-491197038568/authentication-operator/0.log" Mar 18 13:51:50.224437 master-0 kubenswrapper[27835]: I0318 13:51:50.222543 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-sp4ld_bf9d21f9-64d6-4e21-a985-491197038568/authentication-operator/1.log" Mar 18 13:51:50.261229 master-0 kubenswrapper[27835]: I0318 13:51:50.261172 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-7bfhd_80daec9e-b15b-4782-a1f7-ce398bbe323b/assisted-installer-controller/0.log" Mar 18 13:51:50.571046 master-0 kubenswrapper[27835]: I0318 13:51:50.570982 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b62cm/master-0-debug-t8gp9"] Mar 18 13:51:50.572992 master-0 kubenswrapper[27835]: I0318 13:51:50.572943 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.622215 master-0 kubenswrapper[27835]: I0318 13:51:50.622145 27835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf"] Mar 18 13:51:50.624025 master-0 kubenswrapper[27835]: I0318 13:51:50.623986 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.714489 master-0 kubenswrapper[27835]: I0318 13:51:50.701861 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf"] Mar 18 13:51:50.714489 master-0 kubenswrapper[27835]: I0318 13:51:50.704683 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpnw\" (UniqueName: \"kubernetes.io/projected/973053dd-62c6-47ec-a86b-2eba772a7039-kube-api-access-bgpnw\") pod \"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.714489 master-0 kubenswrapper[27835]: I0318 13:51:50.704843 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/973053dd-62c6-47ec-a86b-2eba772a7039-host\") pod \"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.807073 master-0 kubenswrapper[27835]: I0318 13:51:50.806998 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-podres\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.807073 master-0 kubenswrapper[27835]: I0318 13:51:50.807068 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-sys\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 
13:51:50.807358 master-0 kubenswrapper[27835]: I0318 13:51:50.807094 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-proc\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.807358 master-0 kubenswrapper[27835]: I0318 13:51:50.807122 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/973053dd-62c6-47ec-a86b-2eba772a7039-host\") pod \"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.807358 master-0 kubenswrapper[27835]: I0318 13:51:50.807193 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/9b79b334-3d10-459a-a6c3-30395b193d8f-kube-api-access-df9xh\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.807358 master-0 kubenswrapper[27835]: I0318 13:51:50.807217 27835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-lib-modules\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.807358 master-0 kubenswrapper[27835]: I0318 13:51:50.807290 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpnw\" (UniqueName: \"kubernetes.io/projected/973053dd-62c6-47ec-a86b-2eba772a7039-kube-api-access-bgpnw\") pod 
\"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.807815 master-0 kubenswrapper[27835]: I0318 13:51:50.807794 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/973053dd-62c6-47ec-a86b-2eba772a7039-host\") pod \"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.830294 master-0 kubenswrapper[27835]: I0318 13:51:50.830169 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpnw\" (UniqueName: \"kubernetes.io/projected/973053dd-62c6-47ec-a86b-2eba772a7039-kube-api-access-bgpnw\") pod \"master-0-debug-t8gp9\" (UID: \"973053dd-62c6-47ec-a86b-2eba772a7039\") " pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.901648 master-0 kubenswrapper[27835]: I0318 13:51:50.901582 27835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" Mar 18 13:51:50.909606 master-0 kubenswrapper[27835]: I0318 13:51:50.909546 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-podres\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.909858 master-0 kubenswrapper[27835]: I0318 13:51:50.909616 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-sys\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.909858 master-0 kubenswrapper[27835]: I0318 13:51:50.909652 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-proc\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.909858 master-0 kubenswrapper[27835]: I0318 13:51:50.909745 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-podres\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.909858 master-0 kubenswrapper[27835]: I0318 13:51:50.909810 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-proc\") pod \"perf-node-gather-daemonset-r4jxf\" 
(UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.909858 master-0 kubenswrapper[27835]: I0318 13:51:50.909812 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-sys\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.910097 master-0 kubenswrapper[27835]: I0318 13:51:50.909848 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df9xh\" (UniqueName: \"kubernetes.io/projected/9b79b334-3d10-459a-a6c3-30395b193d8f-kube-api-access-df9xh\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.910097 master-0 kubenswrapper[27835]: I0318 13:51:50.909907 27835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-lib-modules\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.910097 master-0 kubenswrapper[27835]: I0318 13:51:50.910085 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b79b334-3d10-459a-a6c3-30395b193d8f-lib-modules\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.930137 master-0 kubenswrapper[27835]: I0318 13:51:50.930084 27835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df9xh\" (UniqueName: 
\"kubernetes.io/projected/9b79b334-3d10-459a-a6c3-30395b193d8f-kube-api-access-df9xh\") pod \"perf-node-gather-daemonset-r4jxf\" (UID: \"9b79b334-3d10-459a-a6c3-30395b193d8f\") " pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:50.947029 master-0 kubenswrapper[27835]: W0318 13:51:50.946535 27835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973053dd_62c6_47ec_a86b_2eba772a7039.slice/crio-979dc5e9c382f4c42f60c5ee272d09e5d45c0fd30edc26158134fb47258efb90 WatchSource:0}: Error finding container 979dc5e9c382f4c42f60c5ee272d09e5d45c0fd30edc26158134fb47258efb90: Status 404 returned error can't find the container with id 979dc5e9c382f4c42f60c5ee272d09e5d45c0fd30edc26158134fb47258efb90 Mar 18 13:51:50.951110 master-0 kubenswrapper[27835]: I0318 13:51:50.951076 27835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:51.437440 master-0 kubenswrapper[27835]: I0318 13:51:51.436174 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" event={"ID":"973053dd-62c6-47ec-a86b-2eba772a7039","Type":"ContainerStarted","Data":"979dc5e9c382f4c42f60c5ee272d09e5d45c0fd30edc26158134fb47258efb90"} Mar 18 13:51:51.475426 master-0 kubenswrapper[27835]: I0318 13:51:51.470108 27835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf"] Mar 18 13:51:51.768860 master-0 kubenswrapper[27835]: I0318 13:51:51.764825 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-gvmtv_00375107-9a3b-4161-a90d-72ea8827c5fc/router/4.log" Mar 18 13:51:51.773122 master-0 kubenswrapper[27835]: I0318 13:51:51.769660 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-gvmtv_00375107-9a3b-4161-a90d-72ea8827c5fc/router/5.log" Mar 18 13:51:52.457269 master-0 kubenswrapper[27835]: I0318 13:51:52.456131 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" event={"ID":"9b79b334-3d10-459a-a6c3-30395b193d8f","Type":"ContainerStarted","Data":"8241d52b343bfdbb1b90b200c8bb878fc691cf0cebd075f3ccaf39a2e4341738"} Mar 18 13:51:52.457269 master-0 kubenswrapper[27835]: I0318 13:51:52.456195 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" event={"ID":"9b79b334-3d10-459a-a6c3-30395b193d8f","Type":"ContainerStarted","Data":"c7b6d77a80eeb8894739c1832b7a124d75db3b6b6e110eb734678d1da4a30b3f"} Mar 18 13:51:52.457962 master-0 kubenswrapper[27835]: I0318 13:51:52.457752 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:51:52.501895 master-0 kubenswrapper[27835]: I0318 13:51:52.500270 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" podStartSLOduration=2.500246887 podStartE2EDuration="2.500246887s" podCreationTimestamp="2026-03-18 13:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 13:51:52.485879565 +0000 UTC m=+1676.451091125" watchObservedRunningTime="2026-03-18 13:51:52.500246887 +0000 UTC m=+1676.465458447" Mar 18 13:51:53.129021 master-0 kubenswrapper[27835]: I0318 13:51:53.128229 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5bb6f9f846-6wq9c_7fb5bad7-07d9-45ac-ad27-a887d12d148f/oauth-apiserver/0.log" Mar 18 13:51:53.152432 master-0 kubenswrapper[27835]: I0318 13:51:53.150219 27835 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_cinder-07518-api-0_2fcb2b42-0212-4505-ac03-9b094ce3f2eb/cinder-07518-api-log/0.log" Mar 18 13:51:53.152432 master-0 kubenswrapper[27835]: I0318 13:51:53.152272 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5bb6f9f846-6wq9c_7fb5bad7-07d9-45ac-ad27-a887d12d148f/fix-audit-permissions/0.log" Mar 18 13:51:53.166870 master-0 kubenswrapper[27835]: I0318 13:51:53.166815 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-api-0_2fcb2b42-0212-4505-ac03-9b094ce3f2eb/cinder-api/0.log" Mar 18 13:51:53.250439 master-0 kubenswrapper[27835]: I0318 13:51:53.249310 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-backup-0_59594f38-4062-4f67-a913-2d334dba30c0/cinder-backup/0.log" Mar 18 13:51:53.272564 master-0 kubenswrapper[27835]: I0318 13:51:53.271879 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-backup-0_59594f38-4062-4f67-a913-2d334dba30c0/probe/0.log" Mar 18 13:51:53.288201 master-0 kubenswrapper[27835]: I0318 13:51:53.288151 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-db-sync-jhwx7_b14c1c8f-6882-4b86-bfdf-ef1cae5e8321/cinder-07518-db-sync/0.log" Mar 18 13:51:53.388485 master-0 kubenswrapper[27835]: I0318 13:51:53.388336 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-scheduler-0_76243274-a619-4c3f-8b9c-19a8c89eb6f1/cinder-scheduler/0.log" Mar 18 13:51:53.409445 master-0 kubenswrapper[27835]: I0318 13:51:53.406372 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-scheduler-0_76243274-a619-4c3f-8b9c-19a8c89eb6f1/probe/0.log" Mar 18 13:51:53.505275 master-0 kubenswrapper[27835]: I0318 13:51:53.505218 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-07518-volume-lvm-iscsi-0_223b1b5f-e043-4216-99da-3329720c45d7/cinder-volume/0.log" Mar 18 13:51:53.545448 master-0 kubenswrapper[27835]: I0318 13:51:53.544974 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-07518-volume-lvm-iscsi-0_223b1b5f-e043-4216-99da-3329720c45d7/probe/0.log" Mar 18 13:51:53.564971 master-0 kubenswrapper[27835]: I0318 13:51:53.564921 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-cb39-account-create-update-tfcs5_dad967d0-9ad1-4342-9885-e5e28a68d3af/mariadb-account-create-update/0.log" Mar 18 13:51:53.597795 master-0 kubenswrapper[27835]: I0318 13:51:53.597744 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-8j9l6_24b8658d-cf54-48b9-b4ee-fceef6236403/mariadb-database-create/0.log" Mar 18 13:51:53.608941 master-0 kubenswrapper[27835]: I0318 13:51:53.608570 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54f4d7d767-8d7qt_eef4e478-7158-4599-af3f-b53306d36487/dnsmasq-dns/0.log" Mar 18 13:51:53.617312 master-0 kubenswrapper[27835]: I0318 13:51:53.617271 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-54f4d7d767-8d7qt_eef4e478-7158-4599-af3f-b53306d36487/init/0.log" Mar 18 13:51:53.633887 master-0 kubenswrapper[27835]: I0318 13:51:53.632909 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-0dea-account-create-update-x7t7q_40629581-9efa-429e-adb9-d34bd5a7503d/mariadb-account-create-update/0.log" Mar 18 13:51:53.729811 master-0 kubenswrapper[27835]: I0318 13:51:53.729757 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4f519-default-external-api-0_a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1/glance-log/0.log" Mar 18 13:51:53.738607 master-0 kubenswrapper[27835]: I0318 13:51:53.738570 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-4f519-default-external-api-0_a451fb23-4bc4-4a1c-a0fb-6854ce0ea1b1/glance-httpd/0.log" Mar 18 13:51:53.804229 master-0 kubenswrapper[27835]: I0318 13:51:53.804166 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4f519-default-internal-api-0_7948c85d-504d-49e9-8c64-f201e15eae46/glance-log/0.log" Mar 18 13:51:53.815143 master-0 kubenswrapper[27835]: I0318 13:51:53.815081 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-4f519-default-internal-api-0_7948c85d-504d-49e9-8c64-f201e15eae46/glance-httpd/0.log" Mar 18 13:51:53.824379 master-0 kubenswrapper[27835]: I0318 13:51:53.824344 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-qftmv_c5346156-9529-4575-bd1e-79d1d034ec56/mariadb-database-create/0.log" Mar 18 13:51:53.844537 master-0 kubenswrapper[27835]: I0318 13:51:53.844485 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-p2pp6_05a99a49-5215-40c9-ba30-54618aa67479/glance-db-sync/0.log" Mar 18 13:51:53.862297 master-0 kubenswrapper[27835]: I0318 13:51:53.862240 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-64fd4cfc77-nwzfl_f5e2d50f-621d-4ba6-9293-7c3d111e08dc/ironic-api-log/0.log" Mar 18 13:51:53.889834 master-0 kubenswrapper[27835]: I0318 13:51:53.889791 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-64fd4cfc77-nwzfl_f5e2d50f-621d-4ba6-9293-7c3d111e08dc/ironic-api/0.log" Mar 18 13:51:53.900352 master-0 kubenswrapper[27835]: I0318 13:51:53.900263 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-64fd4cfc77-nwzfl_f5e2d50f-621d-4ba6-9293-7c3d111e08dc/init/0.log" Mar 18 13:51:53.933936 master-0 kubenswrapper[27835]: I0318 13:51:53.933882 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/ironic-conductor/0.log" Mar 18 13:51:53.945517 master-0 kubenswrapper[27835]: I0318 13:51:53.943924 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/httpboot/0.log" Mar 18 13:51:53.977077 master-0 kubenswrapper[27835]: I0318 13:51:53.977009 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/dnsmasq/0.log" Mar 18 13:51:53.995010 master-0 kubenswrapper[27835]: I0318 13:51:53.994855 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/init/0.log" Mar 18 13:51:54.008020 master-0 kubenswrapper[27835]: I0318 13:51:54.007925 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/ironic-python-agent-init/0.log" Mar 18 13:51:54.213426 master-0 kubenswrapper[27835]: I0318 13:51:54.213363 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/kube-rbac-proxy/0.log" Mar 18 13:51:54.274853 master-0 kubenswrapper[27835]: I0318 13:51:54.274696 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/cluster-autoscaler-operator/0.log" Mar 18 13:51:54.288504 master-0 kubenswrapper[27835]: I0318 13:51:54.288458 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-lqtbg_2b12af9a-8041-477f-90eb-05bb6ae7861a/cluster-autoscaler-operator/1.log" Mar 18 13:51:54.312101 master-0 kubenswrapper[27835]: I0318 13:51:54.311827 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/2.log" Mar 18 13:51:54.317227 master-0 kubenswrapper[27835]: I0318 13:51:54.317145 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/cluster-baremetal-operator/3.log" Mar 18 13:51:54.336536 master-0 kubenswrapper[27835]: I0318 13:51:54.336480 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mz4qp_ac6d8eb6-1d5e-4757-9823-5ffe478c711c/baremetal-kube-rbac-proxy/0.log" Mar 18 13:51:54.355890 master-0 kubenswrapper[27835]: I0318 13:51:54.355837 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-5vhnr_bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/control-plane-machine-set-operator/0.log" Mar 18 13:51:54.358173 master-0 kubenswrapper[27835]: I0318 13:51:54.358086 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-5vhnr_bdcd72a6-a8e8-47ba-8b51-7325d35bad6b/control-plane-machine-set-operator/1.log" Mar 18 13:51:54.392624 master-0 kubenswrapper[27835]: I0318 13:51:54.392568 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/kube-rbac-proxy/0.log" Mar 18 13:51:54.419294 master-0 kubenswrapper[27835]: I0318 13:51:54.419244 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/machine-api-operator/0.log" Mar 18 13:51:54.424010 master-0 kubenswrapper[27835]: I0318 13:51:54.423981 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9bqxm_68104a8c-3fac-4d4b-b975-bc2d045b3375/machine-api-operator/1.log" Mar 18 13:51:54.726180 master-0 kubenswrapper[27835]: I0318 13:51:54.726139 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_d2a793d4-62c6-4482-a5e5-21ed4cc72e33/pxe-init/0.log" Mar 18 13:51:54.739071 master-0 kubenswrapper[27835]: I0318 13:51:54.739015 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-create-vkvz7_b686167f-35b3-4b2c-a6c7-074c63023350/mariadb-database-create/0.log" Mar 18 13:51:54.761458 master-0 kubenswrapper[27835]: I0318 13:51:54.761338 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-tmkx2_653293fa-39a3-4b35-ad41-6a3cac734e80/ironic-db-sync/0.log" Mar 18 13:51:54.773473 master-0 kubenswrapper[27835]: I0318 13:51:54.773382 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-tmkx2_653293fa-39a3-4b35-ad41-6a3cac734e80/init/0.log" Mar 18 13:51:54.784498 master-0 kubenswrapper[27835]: I0318 13:51:54.784293 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-f5fe-account-create-update-4dwnc_6b779cf8-bfc6-417f-b28b-4cfa060a6db2/mariadb-account-create-update/0.log" Mar 18 13:51:54.818293 master-0 kubenswrapper[27835]: I0318 13:51:54.818247 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/ironic-inspector-httpd/0.log" Mar 18 13:51:54.844017 master-0 kubenswrapper[27835]: I0318 13:51:54.841983 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/ironic-inspector/0.log" Mar 18 13:51:54.849739 master-0 kubenswrapper[27835]: I0318 13:51:54.849689 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/inspector-httpboot/0.log" Mar 18 13:51:54.862769 master-0 kubenswrapper[27835]: I0318 13:51:54.861503 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/ramdisk-logs/0.log" Mar 18 13:51:54.874287 master-0 kubenswrapper[27835]: I0318 13:51:54.874089 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/inspector-dnsmasq/0.log" Mar 18 13:51:54.883998 master-0 kubenswrapper[27835]: I0318 13:51:54.883879 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/ironic-python-agent-init/0.log" Mar 18 13:51:54.902172 master-0 kubenswrapper[27835]: I0318 13:51:54.901603 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_3d880f5c-a4f4-4e41-aa98-185af9802996/inspector-pxe-init/0.log" Mar 18 13:51:54.913052 master-0 kubenswrapper[27835]: I0318 13:51:54.912590 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-1f88-account-create-update-pdcfx_b7cb1d5c-2899-4a22-b167-97f305fd2393/mariadb-account-create-update/0.log" Mar 18 13:51:54.920721 master-0 kubenswrapper[27835]: I0318 13:51:54.920665 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-create-tkbch_31573687-c807-4574-8813-ba2280fb170a/mariadb-database-create/0.log" Mar 18 13:51:54.958581 master-0 kubenswrapper[27835]: I0318 13:51:54.958492 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-sync-s7cvj_c1fc873e-3d35-4632-a144-08e9b6e74e02/ironic-inspector-db-sync/0.log" Mar 18 13:51:54.979030 master-0 kubenswrapper[27835]: I0318 13:51:54.978912 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-neutron-agent-689c666fd-tjnb9_cc7df07d-4c6b-469f-b007-e3d799a49fd5/ironic-neutron-agent/3.log" Mar 18 13:51:54.981647 master-0 kubenswrapper[27835]: I0318 13:51:54.981615 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-689c666fd-tjnb9_cc7df07d-4c6b-469f-b007-e3d799a49fd5/ironic-neutron-agent/2.log" Mar 18 13:51:55.049436 master-0 kubenswrapper[27835]: I0318 13:51:55.048024 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-58d6fd9d55-fhlft_88fc7fd1-97a2-4879-b813-d29ddcc4d3b0/keystone-api/0.log" Mar 18 13:51:55.058439 master-0 kubenswrapper[27835]: I0318 13:51:55.056175 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8ae1-account-create-update-tpb5p_84a4c423-d112-4b2d-9917-2fb8af188187/mariadb-account-create-update/0.log" Mar 18 13:51:55.075442 master-0 kubenswrapper[27835]: I0318 13:51:55.075380 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-xcnw2_81aa1d7d-25c8-4408-a790-4c9fe8ed9742/keystone-bootstrap/0.log" Mar 18 13:51:55.091437 master-0 kubenswrapper[27835]: I0318 13:51:55.088382 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-lbbjp_36e457fd-4fdd-4013-a0ba-e4b04480064b/mariadb-database-create/0.log" Mar 18 13:51:55.108429 master-0 kubenswrapper[27835]: I0318 13:51:55.107401 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-sj9px_b0c2698f-e0c0-413f-8e86-184f8ab0b231/keystone-db-sync/0.log" Mar 18 13:51:56.288167 master-0 kubenswrapper[27835]: I0318 13:51:56.288048 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/cluster-cloud-controller-manager/0.log" Mar 18 13:51:56.309961 master-0 kubenswrapper[27835]: I0318 
13:51:56.309890 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/cluster-cloud-controller-manager/1.log" Mar 18 13:51:57.151729 master-0 kubenswrapper[27835]: I0318 13:51:57.151665 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/0.log" Mar 18 13:51:57.157495 master-0 kubenswrapper[27835]: I0318 13:51:57.156933 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/config-sync-controllers/1.log" Mar 18 13:51:57.174564 master-0 kubenswrapper[27835]: I0318 13:51:57.174359 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-8lzkl_80994f33-21e7-45d6-9f21-1cfd8e1f41ce/kube-rbac-proxy/0.log" Mar 18 13:51:59.704701 master-0 kubenswrapper[27835]: I0318 13:51:59.704640 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-59d95_a9de7243-90c0-49c4-8059-34e0558fca40/kube-rbac-proxy/0.log" Mar 18 13:51:59.765850 master-0 kubenswrapper[27835]: I0318 13:51:59.764662 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-59d95_a9de7243-90c0-49c4-8059-34e0558fca40/cloud-credential-operator/0.log" Mar 18 13:52:00.990694 master-0 kubenswrapper[27835]: I0318 13:52:00.990596 27835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-b62cm/perf-node-gather-daemonset-r4jxf" Mar 18 13:52:02.359508 
master-0 kubenswrapper[27835]: I0318 13:52:02.359102 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-qwgrm_ce3728ab-5d50-40ac-95b3-74a5b62a557f/openshift-config-operator/1.log" Mar 18 13:52:02.370440 master-0 kubenswrapper[27835]: I0318 13:52:02.370255 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-qwgrm_ce3728ab-5d50-40ac-95b3-74a5b62a557f/openshift-config-operator/2.log" Mar 18 13:52:02.399187 master-0 kubenswrapper[27835]: I0318 13:52:02.398966 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-qwgrm_ce3728ab-5d50-40ac-95b3-74a5b62a557f/openshift-api/0.log" Mar 18 13:52:03.131162 master-0 kubenswrapper[27835]: I0318 13:52:03.126726 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_52c3f355-8836-4d58-84ee-d6c2afb6c776/memcached/0.log" Mar 18 13:52:03.299444 master-0 kubenswrapper[27835]: I0318 13:52:03.295118 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bf5c56f77-ccf49_1253c32f-2d3e-455b-881b-e4d04e4f746c/neutron-api/0.log" Mar 18 13:52:03.319838 master-0 kubenswrapper[27835]: I0318 13:52:03.319776 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6bf5c56f77-ccf49_1253c32f-2d3e-455b-881b-e4d04e4f746c/neutron-httpd/0.log" Mar 18 13:52:03.344084 master-0 kubenswrapper[27835]: I0318 13:52:03.344014 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-ac18-account-create-update-t8fjp_9fb62e62-ee53-4992-a2af-06420a2812ed/mariadb-account-create-update/0.log" Mar 18 13:52:03.372225 master-0 kubenswrapper[27835]: I0318 13:52:03.372177 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-7tsdz_825a0cb3-ac48-4974-8e6c-eb30956b617e/mariadb-database-create/0.log" Mar 18 
13:52:03.400525 master-0 kubenswrapper[27835]: I0318 13:52:03.396312 27835 scope.go:117] "RemoveContainer" containerID="f3610f6494fb27e9563ad2cb50befe234c773a41f0fe21f55ca6a508beb55696" Mar 18 13:52:03.401811 master-0 kubenswrapper[27835]: I0318 13:52:03.401649 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-mtjpd_66ceeb4b-18bd-4d26-a1e7-ef700771aeec/neutron-db-sync/0.log" Mar 18 13:52:03.516290 master-0 kubenswrapper[27835]: I0318 13:52:03.516227 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fb70fd2-80c4-4ae6-8568-67bb171eb5cd/nova-api-log/0.log" Mar 18 13:52:03.638615 master-0 kubenswrapper[27835]: I0318 13:52:03.638464 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8fb70fd2-80c4-4ae6-8568-67bb171eb5cd/nova-api-api/0.log" Mar 18 13:52:03.646746 master-0 kubenswrapper[27835]: I0318 13:52:03.646684 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-gwfch_549326ca-2cc7-49f2-bffb-a76953703d01/mariadb-database-create/0.log" Mar 18 13:52:03.656020 master-0 kubenswrapper[27835]: I0318 13:52:03.655953 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-ff10-account-create-update-49jqz_d8553452-d2f4-4ad0-9fe0-0d2d984be2b0/mariadb-account-create-update/0.log" Mar 18 13:52:03.664949 master-0 kubenswrapper[27835]: I0318 13:52:03.664876 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-63a9-account-create-update-dbfqn_38fdf2ba-5074-47b8-b534-c12270b771e8/mariadb-account-create-update/0.log" Mar 18 13:52:03.682713 master-0 kubenswrapper[27835]: I0318 13:52:03.681925 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-qd8n8_d09230d0-21c5-4e63-b56d-f9346dce706d/nova-manage/0.log" Mar 18 13:52:03.805797 master-0 kubenswrapper[27835]: I0318 13:52:03.805750 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_abc6cc7b-4e38-41fc-9f09-66f46a74cdbc/nova-cell0-conductor-conductor/0.log" Mar 18 13:52:03.823239 master-0 kubenswrapper[27835]: I0318 13:52:03.823180 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-rnqgv_37131fa0-c66e-4abc-b58f-f84c492056df/nova-cell0-conductor-db-sync/0.log" Mar 18 13:52:03.854762 master-0 kubenswrapper[27835]: I0318 13:52:03.854387 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-g2nft_83e94368-fc4d-4fdd-bb0e-266a8d57bfd1/mariadb-database-create/0.log" Mar 18 13:52:03.856776 master-0 kubenswrapper[27835]: I0318 13:52:03.856743 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-cxn2f_5e8a9745-28f0-47b0-a930-49ce65ca1ae0/console-operator/0.log" Mar 18 13:52:03.867644 master-0 kubenswrapper[27835]: I0318 13:52:03.867597 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-pgthm_fe2d1a9b-c471-41bf-bdb6-a2dc271cddf0/nova-manage/0.log" Mar 18 13:52:03.943444 master-0 kubenswrapper[27835]: I0318 13:52:03.943375 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_0a015210-4858-461a-955f-761275fc2b6a/nova-cell1-compute-ironic-compute-compute/0.log" Mar 18 13:52:04.012378 master-0 kubenswrapper[27835]: I0318 13:52:04.012334 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3c66e4d3-5b19-4466-b3c3-61bd46730848/nova-cell1-conductor-conductor/0.log" Mar 18 13:52:04.027755 master-0 kubenswrapper[27835]: I0318 13:52:04.027516 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-89c8v_aaa7ee44-2954-4d95-8cf3-a1fd004b87e6/nova-cell1-conductor-db-sync/0.log" Mar 18 13:52:04.037093 master-0 kubenswrapper[27835]: I0318 13:52:04.037035 27835 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-g9ddk_8aeac094-6720-488f-a255-c3042b569033/mariadb-database-create/0.log" Mar 18 13:52:04.052934 master-0 kubenswrapper[27835]: I0318 13:52:04.052881 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-ea54-account-create-update-85drm_1ceb0690-8659-42ff-929b-faf3879c7ffb/mariadb-account-create-update/0.log" Mar 18 13:52:04.067939 master-0 kubenswrapper[27835]: I0318 13:52:04.067892 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-host-discover-hqvsx_d5ac354c-0c65-4201-987a-da6a75a7a63c/nova-manage/0.log" Mar 18 13:52:04.138122 master-0 kubenswrapper[27835]: I0318 13:52:04.138069 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b3975b38-2c90-4255-b2d1-ab1b2fa723b5/nova-cell1-novncproxy-novncproxy/0.log" Mar 18 13:52:04.217530 master-0 kubenswrapper[27835]: I0318 13:52:04.217291 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239/nova-metadata-log/0.log" Mar 18 13:52:04.278225 master-0 kubenswrapper[27835]: I0318 13:52:04.278173 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d84c3ac5-5bcc-47c0-b2ef-f944cdd4c239/nova-metadata-metadata/0.log" Mar 18 13:52:04.366970 master-0 kubenswrapper[27835]: I0318 13:52:04.366906 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2965ba5c-2c16-4038-9a9a-2b7720a286f7/nova-scheduler-scheduler/0.log" Mar 18 13:52:04.403534 master-0 kubenswrapper[27835]: I0318 13:52:04.403466 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b2c70993-8f51-411e-ae8d-65ea5161c75e/galera/0.log" Mar 18 13:52:04.419272 master-0 kubenswrapper[27835]: I0318 13:52:04.419210 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_b2c70993-8f51-411e-ae8d-65ea5161c75e/mysql-bootstrap/0.log" Mar 18 13:52:04.449361 master-0 kubenswrapper[27835]: I0318 13:52:04.449306 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_483c8547-dea7-4fd8-b4db-4849a346d73a/galera/0.log" Mar 18 13:52:04.463193 master-0 kubenswrapper[27835]: I0318 13:52:04.463163 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_483c8547-dea7-4fd8-b4db-4849a346d73a/mysql-bootstrap/0.log" Mar 18 13:52:04.473291 master-0 kubenswrapper[27835]: I0318 13:52:04.473192 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c7433697-622c-4928-be5c-cd3a3c65cc8c/openstackclient/0.log" Mar 18 13:52:04.490229 master-0 kubenswrapper[27835]: I0318 13:52:04.489888 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-zkgwp_c5e56cc8-abf0-403b-9a1f-3f073ae89422/openstack-network-exporter/0.log" Mar 18 13:52:04.506551 master-0 kubenswrapper[27835]: I0318 13:52:04.506503 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtvxg_c0b4c95e-e177-4d01-bd2e-ff94c66d594d/ovsdb-server/0.log" Mar 18 13:52:04.520163 master-0 kubenswrapper[27835]: I0318 13:52:04.520114 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtvxg_c0b4c95e-e177-4d01-bd2e-ff94c66d594d/ovs-vswitchd/0.log" Mar 18 13:52:04.533776 master-0 kubenswrapper[27835]: I0318 13:52:04.532940 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtvxg_c0b4c95e-e177-4d01-bd2e-ff94c66d594d/ovsdb-server-init/0.log" Mar 18 13:52:04.549513 master-0 kubenswrapper[27835]: I0318 13:52:04.547757 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sf7pp_c90d0a00-53e9-4145-8137-d73cee5337f0/ovn-controller/0.log" Mar 18 
13:52:04.558799 master-0 kubenswrapper[27835]: I0318 13:52:04.558296 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c17ba4f-cece-4e06-b786-27992d500ae7/ovn-northd/0.log" Mar 18 13:52:04.564930 master-0 kubenswrapper[27835]: I0318 13:52:04.564891 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c17ba4f-cece-4e06-b786-27992d500ae7/openstack-network-exporter/0.log" Mar 18 13:52:04.585680 master-0 kubenswrapper[27835]: I0318 13:52:04.585614 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d31955ae-5786-4417-880f-f71c7d4347c1/ovsdbserver-nb/0.log" Mar 18 13:52:04.591588 master-0 kubenswrapper[27835]: I0318 13:52:04.591543 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d31955ae-5786-4417-880f-f71c7d4347c1/openstack-network-exporter/0.log" Mar 18 13:52:04.612795 master-0 kubenswrapper[27835]: I0318 13:52:04.612732 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a40472e4-a359-41e5-8e65-f6c7cb3b7ac7/ovsdbserver-sb/0.log" Mar 18 13:52:04.622064 master-0 kubenswrapper[27835]: I0318 13:52:04.621150 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a40472e4-a359-41e5-8e65-f6c7cb3b7ac7/openstack-network-exporter/0.log" Mar 18 13:52:04.639497 master-0 kubenswrapper[27835]: I0318 13:52:04.639359 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-571e-account-create-update-qwmpc_6962730c-c54f-4806-8fd4-165f6c7b5728/mariadb-account-create-update/0.log" Mar 18 13:52:04.668705 master-0 kubenswrapper[27835]: I0318 13:52:04.668665 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8645fd5fb8-gm6gg_a24d7c40-e543-4f45-81ab-2769f7077efb/placement-log/0.log" Mar 18 13:52:04.698368 master-0 kubenswrapper[27835]: I0318 13:52:04.698316 27835 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_placement-8645fd5fb8-gm6gg_a24d7c40-e543-4f45-81ab-2769f7077efb/placement-api/0.log" Mar 18 13:52:04.706581 master-0 kubenswrapper[27835]: I0318 13:52:04.706149 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-hd42z_19318ed4-494a-44fb-b05f-1b82d07994be/mariadb-database-create/0.log" Mar 18 13:52:04.726460 master-0 kubenswrapper[27835]: I0318 13:52:04.726277 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-8mqtx_fa96d61e-36ca-4846-a008-82052eff4ab8/placement-db-sync/0.log" Mar 18 13:52:04.765439 master-0 kubenswrapper[27835]: I0318 13:52:04.764701 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6f51d7b8-7e16-4c10-8e64-a5af8a8522ed/rabbitmq/0.log" Mar 18 13:52:04.780997 master-0 kubenswrapper[27835]: I0318 13:52:04.780926 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6f51d7b8-7e16-4c10-8e64-a5af8a8522ed/setup-container/0.log" Mar 18 13:52:04.823621 master-0 kubenswrapper[27835]: I0318 13:52:04.822521 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1b76c81c-7824-4bfa-af04-9c1fd928fb63/rabbitmq/0.log" Mar 18 13:52:04.829043 master-0 kubenswrapper[27835]: I0318 13:52:04.829004 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1b76c81c-7824-4bfa-af04-9c1fd928fb63/setup-container/0.log" Mar 18 13:52:04.841542 master-0 kubenswrapper[27835]: I0318 13:52:04.841487 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7d8bdbbd8c-dfpg8_30881447-b8cd-4e98-a3d6-78b186a00d82/console/0.log" Mar 18 13:52:04.846787 master-0 kubenswrapper[27835]: I0318 13:52:04.846662 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_root-account-create-update-5njg9_644cf655-a9b1-4879-b84b-8db7dc2a98e6/mariadb-account-create-update/0.log" Mar 18 13:52:04.879563 master-0 kubenswrapper[27835]: I0318 13:52:04.879506 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-2tkdh_3f688009-66eb-490d-a0fb-464dba69fb96/download-server/0.log" Mar 18 13:52:04.891549 master-0 kubenswrapper[27835]: I0318 13:52:04.891509 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f6cbd65db-gzvz8_8db812c3-c391-4147-8220-fdd68cdd11d3/proxy-httpd/0.log" Mar 18 13:52:04.913635 master-0 kubenswrapper[27835]: I0318 13:52:04.912455 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f6cbd65db-gzvz8_8db812c3-c391-4147-8220-fdd68cdd11d3/proxy-server/0.log" Mar 18 13:52:04.931380 master-0 kubenswrapper[27835]: I0318 13:52:04.931286 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gd8zd_e6a77218-90a4-48a8-beff-2c3b2d66c53e/swift-ring-rebalance/0.log" Mar 18 13:52:05.215973 master-0 kubenswrapper[27835]: I0318 13:52:05.214476 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/account-server/0.log" Mar 18 13:52:05.256742 master-0 kubenswrapper[27835]: I0318 13:52:05.253601 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/account-replicator/0.log" Mar 18 13:52:05.276377 master-0 kubenswrapper[27835]: I0318 13:52:05.276100 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/account-auditor/0.log" Mar 18 13:52:05.283553 master-0 kubenswrapper[27835]: I0318 13:52:05.283441 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/account-reaper/0.log" Mar 18 13:52:05.298100 master-0 kubenswrapper[27835]: I0318 13:52:05.297463 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/container-server/0.log" Mar 18 13:52:05.320185 master-0 kubenswrapper[27835]: I0318 13:52:05.320124 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/container-replicator/0.log" Mar 18 13:52:05.329841 master-0 kubenswrapper[27835]: I0318 13:52:05.329796 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/container-auditor/0.log" Mar 18 13:52:05.341449 master-0 kubenswrapper[27835]: I0318 13:52:05.341352 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/container-updater/0.log" Mar 18 13:52:05.354859 master-0 kubenswrapper[27835]: I0318 13:52:05.354660 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/object-server/0.log" Mar 18 13:52:05.367155 master-0 kubenswrapper[27835]: I0318 13:52:05.367086 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/object-replicator/0.log" Mar 18 13:52:05.377101 master-0 kubenswrapper[27835]: I0318 13:52:05.377052 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/object-auditor/0.log" Mar 18 13:52:05.456393 master-0 kubenswrapper[27835]: I0318 13:52:05.394123 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/object-updater/0.log" Mar 18 13:52:05.477676 master-0 kubenswrapper[27835]: I0318 
13:52:05.477219 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/object-expirer/0.log" Mar 18 13:52:05.481228 master-0 kubenswrapper[27835]: I0318 13:52:05.481188 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/rsync/0.log" Mar 18 13:52:05.490958 master-0 kubenswrapper[27835]: I0318 13:52:05.490918 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7a767523-b86f-496d-940f-7a8afb0c3535/swift-recon-cron/0.log" Mar 18 13:52:06.060877 master-0 kubenswrapper[27835]: I0318 13:52:06.060811 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-r4dzk_15a97fe2-5022-4997-9936-4247ae7ecb43/cluster-storage-operator/0.log" Mar 18 13:52:06.066553 master-0 kubenswrapper[27835]: I0318 13:52:06.066509 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-r4dzk_15a97fe2-5022-4997-9936-4247ae7ecb43/cluster-storage-operator/1.log" Mar 18 13:52:06.082811 master-0 kubenswrapper[27835]: I0318 13:52:06.082541 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/3.log" Mar 18 13:52:06.087308 master-0 kubenswrapper[27835]: I0318 13:52:06.087249 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qsnxz_deb67ea0-8342-40cb-b0f4-115270e878dd/snapshot-controller/4.log" Mar 18 13:52:06.124974 master-0 kubenswrapper[27835]: I0318 13:52:06.124920 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-68lgz_394061b4-1bac-4699-96d2-88558c1adaf8/csi-snapshot-controller-operator/0.log" Mar 18 13:52:06.130877 master-0 kubenswrapper[27835]: I0318 13:52:06.130834 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-68lgz_394061b4-1bac-4699-96d2-88558c1adaf8/csi-snapshot-controller-operator/1.log" Mar 18 13:52:07.141274 master-0 kubenswrapper[27835]: I0318 13:52:07.140075 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-5lzzn_59bf5114-29f9-4f70-8582-108e95327cb2/dns-operator/0.log" Mar 18 13:52:07.401673 master-0 kubenswrapper[27835]: I0318 13:52:07.401569 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-5lzzn_59bf5114-29f9-4f70-8582-108e95327cb2/kube-rbac-proxy/0.log" Mar 18 13:52:08.243469 master-0 kubenswrapper[27835]: I0318 13:52:08.242204 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-92s8c_029b127e-0faf-4957-b591-9c561b053cda/dns/0.log" Mar 18 13:52:08.264438 master-0 kubenswrapper[27835]: I0318 13:52:08.264364 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-92s8c_029b127e-0faf-4957-b591-9c561b053cda/kube-rbac-proxy/0.log" Mar 18 13:52:08.277108 master-0 kubenswrapper[27835]: I0318 13:52:08.277048 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-7vddk_13c71f7d-1485-4f86-beb2-ee16cf420350/dns-node-resolver/0.log" Mar 18 13:52:09.585793 master-0 kubenswrapper[27835]: I0318 13:52:09.585709 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/1.log" Mar 18 13:52:09.602144 master-0 kubenswrapper[27835]: I0318 
13:52:09.599518 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-jx4mf_19a76585-a9ac-4ed9-9146-bb77b31848c6/etcd-operator/2.log" Mar 18 13:52:09.895293 master-0 kubenswrapper[27835]: I0318 13:52:09.894862 27835 scope.go:117] "RemoveContainer" containerID="31d77bb97d052618340ff92a3dd7c1b07c30e9b59fbfd768fb7455fa70f9cefd" Mar 18 13:52:10.517004 master-0 kubenswrapper[27835]: I0318 13:52:10.516939 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 13:52:10.773756 master-0 kubenswrapper[27835]: I0318 13:52:10.773632 27835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" event={"ID":"973053dd-62c6-47ec-a86b-2eba772a7039","Type":"ContainerStarted","Data":"14a4294f2a84b236b0fdfe84565c243dda6e7d893e25bc908e3581c3e43fe9da"} Mar 18 13:52:10.811094 master-0 kubenswrapper[27835]: I0318 13:52:10.811024 27835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b62cm/master-0-debug-t8gp9" podStartSLOduration=1.783657026 podStartE2EDuration="20.811004341s" podCreationTimestamp="2026-03-18 13:51:50 +0000 UTC" firstStartedPulling="2026-03-18 13:51:50.95214776 +0000 UTC m=+1674.917359320" lastFinishedPulling="2026-03-18 13:52:09.979495075 +0000 UTC m=+1693.944706635" observedRunningTime="2026-03-18 13:52:10.805535042 +0000 UTC m=+1694.770746602" watchObservedRunningTime="2026-03-18 13:52:10.811004341 +0000 UTC m=+1694.776215911" Mar 18 13:52:10.829404 master-0 kubenswrapper[27835]: I0318 13:52:10.829330 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 13:52:10.853043 master-0 kubenswrapper[27835]: I0318 13:52:10.852846 27835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 18 13:52:10.868979 master-0 kubenswrapper[27835]: I0318 13:52:10.868926 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 18 13:52:10.888490 master-0 kubenswrapper[27835]: I0318 13:52:10.888400 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 18 13:52:10.901785 master-0 kubenswrapper[27835]: I0318 13:52:10.901741 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 18 13:52:10.918116 master-0 kubenswrapper[27835]: I0318 13:52:10.917470 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 18 13:52:10.933958 master-0 kubenswrapper[27835]: I0318 13:52:10.933921 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 18 13:52:10.981551 master-0 kubenswrapper[27835]: I0318 13:52:10.981296 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_814ffa63-b08e-4de8-b912-8d7f0638230b/installer/0.log"
Mar 18 13:52:11.031529 master-0 kubenswrapper[27835]: I0318 13:52:11.031096 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_5217b77d-b517-45c3-b76d-eee86d72b141/installer/0.log"
Mar 18 13:52:12.133988 master-0 kubenswrapper[27835]: I0318 13:52:12.133873 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-6l7pv_290d1f84-5c5c-4bff-b045-e6020793cded/cluster-image-registry-operator/0.log"
Mar 18 13:52:12.151947 master-0 kubenswrapper[27835]: I0318 13:52:12.151906 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-f6zh2_407238a6-5f5c-4676-8ece-b9146f67cfb9/node-ca/0.log"
Mar 18 13:52:12.922703 master-0 kubenswrapper[27835]: I0318 13:52:12.922563 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/5.log"
Mar 18 13:52:12.924812 master-0 kubenswrapper[27835]: I0318 13:52:12.924759 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/ingress-operator/6.log"
Mar 18 13:52:12.938453 master-0 kubenswrapper[27835]: I0318 13:52:12.938392 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-wqxpk_d9d09a56-ed4c-40b7-8be1-f3934c07296e/kube-rbac-proxy/0.log"
Mar 18 13:52:13.741709 master-0 kubenswrapper[27835]: I0318 13:52:13.741615 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-6hldc_e54baea8-6c3e-45a0-ac8c-880a8aaa8208/serve-healthcheck-canary/0.log"
Mar 18 13:52:14.414521 master-0 kubenswrapper[27835]: I0318 13:52:14.414469 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-bbqfl_0a6090f0-3a27-4102-b8dd-b071644a3543/insights-operator/0.log"
Mar 18 13:52:14.441427 master-0 kubenswrapper[27835]: I0318 13:52:14.441359 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-bbqfl_0a6090f0-3a27-4102-b8dd-b071644a3543/insights-operator/1.log"
Mar 18 13:52:16.629879 master-0 kubenswrapper[27835]: I0318 13:52:16.629828 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/alertmanager/0.log"
Mar 18 13:52:16.642834 master-0 kubenswrapper[27835]: I0318 13:52:16.642786 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/config-reloader/0.log"
Mar 18 13:52:16.659010 master-0 kubenswrapper[27835]: I0318 13:52:16.658960 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/kube-rbac-proxy-web/0.log"
Mar 18 13:52:16.674941 master-0 kubenswrapper[27835]: I0318 13:52:16.674893 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/kube-rbac-proxy/0.log"
Mar 18 13:52:16.688776 master-0 kubenswrapper[27835]: I0318 13:52:16.688727 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/kube-rbac-proxy-metric/0.log"
Mar 18 13:52:16.701010 master-0 kubenswrapper[27835]: I0318 13:52:16.700959 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/prom-label-proxy/0.log"
Mar 18 13:52:16.716831 master-0 kubenswrapper[27835]: I0318 13:52:16.716771 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_3d46bdde-fa29-4faa-a7a8-fb52f9bdd939/init-config-reloader/0.log"
Mar 18 13:52:16.778534 master-0 kubenswrapper[27835]: I0318 13:52:16.778470 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-n8hgl_8c0e5eca-819b-40f3-bf77-0cd90a4f6e94/cluster-monitoring-operator/0.log"
Mar 18 13:52:16.796147 master-0 kubenswrapper[27835]: I0318 13:52:16.796085 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-mxcng_6a93ff56-362e-44fc-a54f-666a01559892/kube-state-metrics/0.log"
Mar 18 13:52:16.805783 master-0 kubenswrapper[27835]: I0318 13:52:16.805696 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-mxcng_6a93ff56-362e-44fc-a54f-666a01559892/kube-rbac-proxy-main/0.log"
Mar 18 13:52:16.815888 master-0 kubenswrapper[27835]: I0318 13:52:16.815815 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-mxcng_6a93ff56-362e-44fc-a54f-666a01559892/kube-rbac-proxy-self/0.log"
Mar 18 13:52:16.835380 master-0 kubenswrapper[27835]: I0318 13:52:16.835326 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-5668cbc594-2kzhf_1edb65b1-2635-4c6b-95c9-da2befb434b2/metrics-server/0.log"
Mar 18 13:52:16.877165 master-0 kubenswrapper[27835]: I0318 13:52:16.877095 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-75f844c59b-v7dzz_9439c9e6-476c-4bee-8285-5155fa553f30/monitoring-plugin/0.log"
Mar 18 13:52:16.894907 master-0 kubenswrapper[27835]: I0318 13:52:16.894781 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-t4p42_702076a9-b542-4768-9e9e-99b2cac0a66e/node-exporter/0.log"
Mar 18 13:52:16.909498 master-0 kubenswrapper[27835]: I0318 13:52:16.909449 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-t4p42_702076a9-b542-4768-9e9e-99b2cac0a66e/kube-rbac-proxy/0.log"
Mar 18 13:52:16.920964 master-0 kubenswrapper[27835]: I0318 13:52:16.920922 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-t4p42_702076a9-b542-4768-9e9e-99b2cac0a66e/init-textfile/0.log"
Mar 18 13:52:16.946382 master-0 kubenswrapper[27835]: I0318 13:52:16.946320 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-s4ql7_d325c523-8e6f-4665-9f54-334eaf301141/kube-rbac-proxy-main/0.log"
Mar 18 13:52:16.960689 master-0 kubenswrapper[27835]: I0318 13:52:16.960640 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-s4ql7_d325c523-8e6f-4665-9f54-334eaf301141/kube-rbac-proxy-self/0.log"
Mar 18 13:52:16.985167 master-0 kubenswrapper[27835]: I0318 13:52:16.985125 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-s4ql7_d325c523-8e6f-4665-9f54-334eaf301141/openshift-state-metrics/0.log"
Mar 18 13:52:17.025921 master-0 kubenswrapper[27835]: I0318 13:52:17.025873 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/prometheus/0.log"
Mar 18 13:52:17.039245 master-0 kubenswrapper[27835]: I0318 13:52:17.039167 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/config-reloader/0.log"
Mar 18 13:52:17.054668 master-0 kubenswrapper[27835]: I0318 13:52:17.054562 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/thanos-sidecar/0.log"
Mar 18 13:52:17.069429 master-0 kubenswrapper[27835]: I0318 13:52:17.069358 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/kube-rbac-proxy-web/0.log"
Mar 18 13:52:17.082833 master-0 kubenswrapper[27835]: I0318 13:52:17.082767 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/kube-rbac-proxy/0.log"
Mar 18 13:52:17.115857 master-0 kubenswrapper[27835]: I0318 13:52:17.115811 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/kube-rbac-proxy-thanos/0.log"
Mar 18 13:52:17.129701 master-0 kubenswrapper[27835]: I0318 13:52:17.129635 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_7b756a66-3b31-4c6c-acf2-94a47924cd17/init-config-reloader/0.log"
Mar 18 13:52:17.155868 master-0 kubenswrapper[27835]: I0318 13:52:17.155727 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-7tcjk_fb65c095-ca20-432c-a069-ad6719fca9c8/prometheus-operator/0.log"
Mar 18 13:52:17.168259 master-0 kubenswrapper[27835]: I0318 13:52:17.168177 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-7tcjk_fb65c095-ca20-432c-a069-ad6719fca9c8/kube-rbac-proxy/0.log"
Mar 18 13:52:17.186583 master-0 kubenswrapper[27835]: I0318 13:52:17.186535 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-wsmsc_6db2bfbd-d8db-4384-8979-23e8a1e87e5e/prometheus-operator-admission-webhook/0.log"
Mar 18 13:52:17.212352 master-0 kubenswrapper[27835]: I0318 13:52:17.212290 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/telemeter-client/0.log"
Mar 18 13:52:17.214664 master-0 kubenswrapper[27835]: I0318 13:52:17.214581 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/telemeter-client/1.log"
Mar 18 13:52:17.235874 master-0 kubenswrapper[27835]: I0318 13:52:17.235802 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/reload/0.log"
Mar 18 13:52:17.252879 master-0 kubenswrapper[27835]: I0318 13:52:17.252751 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-d855b697f-6v4bh_7e1495df-e141-4c4d-9a05-5e8f3ee2667f/kube-rbac-proxy/0.log"
Mar 18 13:52:17.295279 master-0 kubenswrapper[27835]: I0318 13:52:17.291430 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/thanos-query/0.log"
Mar 18 13:52:17.324434 master-0 kubenswrapper[27835]: I0318 13:52:17.324354 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/kube-rbac-proxy-web/0.log"
Mar 18 13:52:17.363549 master-0 kubenswrapper[27835]: I0318 13:52:17.363487 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/kube-rbac-proxy/0.log"
Mar 18 13:52:17.470299 master-0 kubenswrapper[27835]: I0318 13:52:17.470220 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/prom-label-proxy/0.log"
Mar 18 13:52:17.489963 master-0 kubenswrapper[27835]: I0318 13:52:17.489844 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/kube-rbac-proxy-rules/0.log"
Mar 18 13:52:17.511075 master-0 kubenswrapper[27835]: I0318 13:52:17.510948 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5f7fb669fb-msvkz_cfcf230d-b184-4d7f-aedc-e58264252b88/kube-rbac-proxy-metrics/0.log"
Mar 18 13:52:17.853393 master-0 kubenswrapper[27835]: I0318 13:52:17.853281 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/controller/0.log"
Mar 18 13:52:17.865142 master-0 kubenswrapper[27835]: I0318 13:52:17.865106 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/kube-rbac-proxy/0.log"
Mar 18 13:52:17.897382 master-0 kubenswrapper[27835]: I0318 13:52:17.897302 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/controller/0.log"
Mar 18 13:52:18.975965 master-0 kubenswrapper[27835]: I0318 13:52:18.975875 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/frr/0.log"
Mar 18 13:52:19.111276 master-0 kubenswrapper[27835]: I0318 13:52:19.111216 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/reloader/0.log"
Mar 18 13:52:19.203513 master-0 kubenswrapper[27835]: I0318 13:52:19.203441 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/frr-metrics/0.log"
Mar 18 13:52:19.295966 master-0 kubenswrapper[27835]: I0318 13:52:19.295836 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/kube-rbac-proxy/0.log"
Mar 18 13:52:19.320098 master-0 kubenswrapper[27835]: I0318 13:52:19.317683 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/kube-rbac-proxy-frr/0.log"
Mar 18 13:52:19.334359 master-0 kubenswrapper[27835]: I0318 13:52:19.334190 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-frr-files/0.log"
Mar 18 13:52:19.345809 master-0 kubenswrapper[27835]: I0318 13:52:19.345755 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-reloader/0.log"
Mar 18 13:52:19.360767 master-0 kubenswrapper[27835]: I0318 13:52:19.360712 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/cp-metrics/0.log"
Mar 18 13:52:19.382723 master-0 kubenswrapper[27835]: I0318 13:52:19.382650 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq_fbca4be6-7742-4c5b-ae0f-29f374f6e8a8/extract/0.log"
Mar 18 13:52:19.396432 master-0 kubenswrapper[27835]: I0318 13:52:19.396365 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-49gvc_ea0bf2ff-c852-4365-9856-f3e4aa5b766d/frr-k8s-webhook-server/0.log"
Mar 18 13:52:19.422149 master-0 kubenswrapper[27835]: I0318 13:52:19.422095 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-549849bb46-fnr78_ebfb29f2-806f-4507-8235-055d55cd360b/manager/0.log"
Mar 18 13:52:19.450524 master-0 kubenswrapper[27835]: I0318 13:52:19.450467 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c4dc89f9c-lppql_01ddc4ef-d5f9-4761-b728-828f9a107b0c/webhook-server/0.log"
Mar 18 13:52:19.454991 master-0 kubenswrapper[27835]: I0318 13:52:19.454945 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq_fbca4be6-7742-4c5b-ae0f-29f374f6e8a8/util/0.log"
Mar 18 13:52:19.486513 master-0 kubenswrapper[27835]: I0318 13:52:19.486324 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cpblcq_fbca4be6-7742-4c5b-ae0f-29f374f6e8a8/pull/0.log"
Mar 18 13:52:19.658304 master-0 kubenswrapper[27835]: I0318 13:52:19.656617 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-9hllz_ddaec091-2dee-4ea7-a06e-c30e9c1ba96e/manager/0.log"
Mar 18 13:52:20.119273 master-0 kubenswrapper[27835]: I0318 13:52:20.118591 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jr6nk_675171a9-caa6-495c-930b-aee9a8d4cbeb/speaker/0.log"
Mar 18 13:52:20.130631 master-0 kubenswrapper[27835]: I0318 13:52:20.129434 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jr6nk_675171a9-caa6-495c-930b-aee9a8d4cbeb/kube-rbac-proxy/0.log"
Mar 18 13:52:20.603535 master-0 kubenswrapper[27835]: I0318 13:52:20.603475 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/controller/0.log"
Mar 18 13:52:20.628387 master-0 kubenswrapper[27835]: I0318 13:52:20.628333 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-h6chv_c730deaa-80a1-4fa1-aa84-0c523df12f53/kube-rbac-proxy/0.log"
Mar 18 13:52:20.674359 master-0 kubenswrapper[27835]: I0318 13:52:20.672716 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-n7qbc_bb0c2adb-b940-48a0-b870-826b63cc2de4/controller/0.log"
Mar 18 13:52:21.020141 master-0 kubenswrapper[27835]: I0318 13:52:21.020075 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-7vhnh_ee6bd2ee-8a51-4d24-9abf-5029e73a106a/manager/0.log"
Mar 18 13:52:21.043695 master-0 kubenswrapper[27835]: I0318 13:52:21.043647 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-zr74z_8979be93-aaa2-4ad9-b6d3-50af024d681a/manager/0.log"
Mar 18 13:52:21.217032 master-0 kubenswrapper[27835]: I0318 13:52:21.216964 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-hjwh4_b6047c49-c76b-4345-a72b-74be859fddc7/manager/0.log"
Mar 18 13:52:21.235070 master-0 kubenswrapper[27835]: I0318 13:52:21.235022 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-cjvmp_265baffa-3bec-4faa-be16-00c3d75f3b99/manager/0.log"
Mar 18 13:52:21.251529 master-0 kubenswrapper[27835]: I0318 13:52:21.251195 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-xpppk_d109b5af-e96b-47ce-b4cc-c41c4e87ee49/manager/0.log"
Mar 18 13:52:21.668007 master-0 kubenswrapper[27835]: I0318 13:52:21.667947 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7dd6bb94c9-w5stx_71018e48-64b3-42f0-b37f-dfa72163b1bf/manager/0.log"
Mar 18 13:52:21.817624 master-0 kubenswrapper[27835]: I0318 13:52:21.817579 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-pmtxh_f84705cb-9f70-43e3-ba36-6f9530ad53af/manager/0.log"
Mar 18 13:52:21.914118 master-0 kubenswrapper[27835]: I0318 13:52:21.913979 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-9xn5p_9d322608-4d0f-41cd-aff3-5de61bc2d86e/manager/0.log"
Mar 18 13:52:21.929396 master-0 kubenswrapper[27835]: I0318 13:52:21.929315 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-mqptr_3640aec0-34c2-454b-95cb-822cccc6425f/manager/0.log"
Mar 18 13:52:21.985253 master-0 kubenswrapper[27835]: I0318 13:52:21.985172 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-n87m7_f0c88737-f524-4267-9c13-719936da8c4e/manager/0.log"
Mar 18 13:52:22.099920 master-0 kubenswrapper[27835]: I0318 13:52:22.099813 27835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-g2nh4_5b456512-ca01-4530-9368-6380bd8144e8/manager/0.log"